Wednesday, November 2, 2011

Detecting APT Attackers in Memory with Digital DNA™

HBGary’s Digital DNA™ system is an alternative to traditional signature-based approaches to detecting malicious backdoors. While the “APT is not Malware” mantra is common, APT commonly uses malware. To be precise, APT is just a hacker in the network. Remote access to the network is maintained either through stolen VPN credentials or through the placement of a remote access tool (RAT) – in other words, malware. So, enter DDNA.

DDNA is designed around generic detection of subversive code. To do this, HBGary disassembles everything on the fly and pushes it through a sieve of regular expressions that match against control-flow and data-flow features. I thought it would be fun to delve into some specific examples.
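To make the sieve idea concrete, here is a toy version: regex-style trait rules scored over a disassembly listing. The trait names, patterns, and weights below are invented for illustration and are not DDNA’s actual rules.

```python
import re

# Hypothetical trait rules: a regex over a disassembly listing plus a weight.
# These patterns and weights are illustrative, not DDNA's actual rule set.
TRAITS = [
    ("inline_string_call", re.compile(r"call \$\+\d+.*db '"), 30),
    ("getprocaddress_loop", re.compile(r"call .*GetProcAddress"), 20),
    ("manual_peb_walk", re.compile(r"mov \w+, fs:\[0x30\]"), 40),
]

def score_listing(disasm_lines):
    """Return (total_score, matched_trait_names) for a disassembly listing."""
    total, matched = 0, []
    for name, pattern, weight in TRAITS:
        if any(pattern.search(line) for line in disasm_lines):
            total += weight
            matched.append(name)
    return total, matched

listing = [
    "mov eax, fs:[0x30]          ; PEB access, common in handwritten loaders",
    "call GetProcAddress",
]
print(score_listing(listing))
```

The real system matches against disassembled control flow and data flow rather than text, but the shape is the same: many small weighted rules whose combined score drives the color-coded alert.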

As Martin recently pointed out in his blog post, APT has started to use in-memory injection as a means to hide code. We have noticed remote-access functions injected and split over a range of memory allocations.

In the screenshot, you can see a dozen 4K (0x1000) allocations injected into explorer.exe. (Note: this type of activity can be detected using the free Responder CE.) Each page of memory contains only a tiny portion of the overall malware – something that would frustrate most AV scanners. However, the allocations themselves are suspicious to Digital DNA™, and in particular the last page contains a code fragment that scores quite heavily. This illustrates why a filesystem-only view is not sufficient to detect APT tools. Many advanced techniques involve modifications to the running system and can only be detected in memory.

In this example, the hacker hasn’t hooked anything. Instead, he starts some additional threads to service the malware code. Even though the malware has been split over a dozen pages, the hacker has only started two threads. In this example, allocations #8 and #11 each host a thread subroutine. The other memory pages each hold specific subroutines. For example, one of the memory pages has a function for installation into the registry, while another has a function for hiding a copy of the malware in an alternate data stream. It’s these suspicious behaviors that Digital DNA™ is focused on detecting. Furthermore, it’s the behaviors being used together that will really light up color-coded DDNA alerts.

One suspicious feature is when code exists outside the bounds of a known module. This will occur if the hacker allocated additional space for storing an injected routine. This is commonly done using VirtualAllocEx(), but can also be achieved using the stack of an injected thread. In the latter case, CreateRemoteThread() is used with a stack size argument large enough to store an injected routine. In either case, executable code is detected outside of a defined module, and this will score as suspicious by default even without further analysis.
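The “code outside a known module” test reduces to a range check over the memory map. A minimal sketch with made-up addresses (a real implementation walks the VAD tree and the loaded-module list):

```python
# Hypothetical memory map: (base, size) of loaded modules vs. executable
# private allocations, as a VAD walk might report them. Addresses invented.
modules = [(0x400000, 0x80000), (0x7C800000, 0x100000)]   # e.g. exe + kernel32
exec_allocs = [(0x460000, 0x1000), (0x2A0000, 0x1000), (0x2B0000, 0x1000)]

def outside_any_module(addr, mods):
    """True if addr does not fall inside any loaded module's range."""
    return not any(base <= addr < base + size for base, size in mods)

suspicious = [hex(a) for a, _ in exec_allocs if outside_any_module(a, modules)]
print(suspicious)   # the two orphan 4K pages
```

Executable pages that fail this test score as suspicious by default, which is why the technique works even before any disassembly happens.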

Beyond that, injected code is typically handwritten assembly. In most cases, the operational code will not resemble known compiler patterns (such as code compiled by Visual C++ or Borland). In particular, the code may contain position-independent operations – function calls and data references designed to work independently of the address where the code lives in memory. These are further indicators of suspicion. In my experience, the only time this kind of code appears in a legitimate binary is when DRM is in use (DRM looks and smells like malware anyway).

Looking back at our example, it had some interesting techniques for embedding data inline with code:

In the example, you see the “w32_32” string in use, but what makes this interesting is how the string is embedded inline with the code. Right before the string we see a short call whose target is just past the string, so code execution continues on the other side (and the string’s address is left on the stack). Again, this idiom is suspicious and can be detected generically, as opposed to relying on a specific string or byte pattern.
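As a sketch, the call-over-string idiom can be hunted generically in raw bytes: look for an E8 call with a small forward displacement whose skipped bytes are mostly printable. This is illustrative logic, not DDNA’s actual trait, and the blob below embeds a hypothetical “ws2_32” string:

```python
import string
import struct

def find_call_over_string(code, min_len=4):
    """Scan raw bytes for E8 <small disp> where the skipped bytes are
    mostly printable -- the 'call over an inline string' idiom."""
    hits = []
    printable = set(string.printable.encode())
    for i in range(len(code) - 5):
        if code[i] != 0xE8:
            continue
        disp = struct.unpack_from("<i", code, i + 1)[0]
        if not (min_len <= disp <= 0x100):
            continue
        skipped = code[i + 5 : i + 5 + disp]
        if len(skipped) == disp and all(b in printable or b == 0 for b in skipped):
            hits.append((i, skipped.rstrip(b"\x00")))
    return hits

# nop; call over the 7-byte string; then pop eax recovers the string's address
blob = b"\x90" + b"\xe8\x07\x00\x00\x00" + b"ws2_32\x00" + b"\x58\xc3"
print(find_call_over_string(blob))
```

The point is that the detector keys on the structural idiom (a call skipping over string-like data), so changing the string itself buys the attacker nothing.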

In the case of Digital DNA™, trait code 16 30 detects short calls that jump over inlined networking-related strings. How did we get here? HBGary detected that some APT groups were producing this code pattern as a result of code-level anti-forensics tools. This is exactly the kind of pattern that produces big wins on the detection side, as the code is often cut-and-paste or the obfuscation is applied in batch to otherwise custom-compiled malware. (Of course, now that I’ve blogged about it they will switch to another trick – it’s OK, we have thousands of traits to detect suspicious behaviors.)

Another example of handwritten code is the CRC function used by the hacker to load his table of function pointers. This CRC-based technique has been around in shellcode for a long, long time (digression: I think I released the first public CRC loader in shellcode in the early 2000s – it was a 32-bit CRC. Thinking back, Halvar Flake publicly released a better and smaller 16-bit CRC loader in shellcode shortly afterward. The technique has been written about many times since).

The routine that actually calculates the CRC is usually hand-made – so it too can become a form of attribution. But even if it’s not hand-made, the proximity of the CRC to a GetProcAddress() call would be indicative of this pattern. In our APT example, the author has created a CRC routine for loading a function table:

The CRC calculation is referenced from a routine that is rolling through KERNEL32.DLL and calling GetProcAddress(). This pattern screams for attention: “Hey! I’m malicious!”

So again, Digital DNA™ for the win. The CRC can be detected using a generic method, and when detected in control flow in proximity to a GetProcAddress() loop, it scores hot with trait C3 F7.
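The hash-based loading idea itself is easy to illustrate. The sketch below uses CRC32 over a literal export list; a real loader walks KERNEL32’s export directory in memory, and the actual CRC variant and constants differ per sample:

```python
import zlib

# Hypothetical export table; a real resolver walks KERNEL32's export
# directory in memory instead of using a literal name list.
exports = ["LoadLibraryA", "GetProcAddress", "CreateFileA", "WinExec"]

def resolve_by_crc(wanted_crc):
    """Return the export whose CRC32 matches -- how a CRC loader locates
    an API without embedding its name as a string."""
    for name in exports:
        if zlib.crc32(name.encode()) == wanted_crc:
            return name
    return None

# The malware embeds only the 4-byte hash; the string "WinExec" never
# appears in the binary, which defeats naive string scanning.
print(resolve_by_crc(zlib.crc32(b"WinExec")))
```

This is also why the hand-rolled CRC routine is such a useful hook for detection: the hash loop itself has a recognizable control-flow shape regardless of which constants the author picked.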

These are just some examples of how Digital DNA™ focuses on analyzing the code itself, as opposed to blacklisted MD5’s or ASCII strings. It is not possible to specify these behavioral patterns with simple languages like OpenIOC or even ADXML (Active Defense’s XML for scan policies) – they can only be detected programmatically. That is why our product Active Defense doesn’t depend on IOC’s alone to do the job – in fact, Active Defense starts with full physical memory analysis and Digital DNA™ sequencing. IOC’s come second and only if the user wants to extend the default detection capability with custom threat intelligence. The two methods work well together, Digital DNA™ to detect new and unknown threats, and IOC’s as a follow-up sweep for known APT behaviors.

Using IOC’s effectively

One of the reasons we invented Digital DNA™ is that IOC’s alone aren’t good enough. A problem arises when IOC’s are only used to detect known threats. Think about this – if your IOC’s are just a blacklist of recently discovered malware MD5’s and unique strings, then it’s equivalent to a small AV dat file. Even though IOC’s can be used to detect TTP’s (i.e., scanning the enterprise for split RAR archives or recent use of ‘net.exe’), we generally see them employed to detect specific malware files. If your organization has a database of IOC’s, then look for yourself. How many entries have MD5 checksums? How many are specific to a malware sample, a specific registry key used to survive reboot, etc.? If you see an overabundance of these signatures, then beware – this is the same old blacklist-driven security model that has been failing us for over 10 years now. On the other hand, if you are using IOC’s to scan for more generalized things, such as command-line usage, access times on common utilities, or executables in the recycle bin, then you are on a far better trajectory. I support open intelligence sharing, but I caution you against falling into the “magical strings” bucket. Too often our industry shares threat intelligence in the form of blacklisted MD5’s or IP addresses – this kind of threat intelligence is nearly useless.
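As a toy example of a “generalized” check rather than a magical string, a sweep for PE files in a directory where no executables belong (say, a recycle bin folder) might look like this. The helper and the MZ-header heuristic are illustrative, not ADXML:

```python
import os

def find_executables(root):
    """Flag PE files (by MZ header) under a directory -- e.g. a recycle
    bin folder, where no legitimate process should drop executables."""
    hits = []
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    if f.read(2) == b"MZ":
                        hits.append(path)
            except OSError:
                pass   # unreadable file; skip rather than abort the sweep
    return hits
```

A check like this survives every re-pack and re-compile of the attacker’s toolkit, which is exactly what an MD5 blacklist does not.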

HBGary’s managed services team generates many IOC’s in the course of their work, and I am happy to say that we share all of them with our Active Defense customers – we don’t keep them secret. They are provided automatically in the form of a library that is auto-updated. Customers can pick and choose from many search definitions and use these as a basis to create their own custom searches. Our team tries to steer away from malware-specific indicators, and instead focuses on the generic attack patterns that can be detected at the host. We give these to our customers because we want them to get the most from our software. We enable people to be self-reliant.

When you use Digital DNA™ and IOC’s together, you aren’t relying on a “magical bag of strings” that go stale every two months. Instead, you are detecting new threats and then using IOC’s to apply attrition against the attacker’s persistence. This is a strong defensive position. This is why our proven behavior-based solution approach is increasingly winning us new customers – even unseating our competition in many accounts.


Thursday, September 22, 2011

APT - The Plain Hard Truth

The survivors from the front line have reported in. We stand on the ridge, a tangled mess of bodies behind us. We are the ones who have chased the demon, descending into the binary pit the users call the “enterprise”, and climbed up the other side. What we have seen is not pretty. The collective corporate filesystem is a parking lot for castaway software barely able to run on modern operating systems, squeezing the last bit of life out of burned out win32 DLL’s. There are big piles of unwashed garbage downloaded by employees that were passing by, never deleted, never clean. The strangest mutated crap has been swept tightly into temporary directory corners that have since calcified and become permanent.

More than a single digit percentage of these software programs are a biohazard. Some are just plain broken, wheezing out juice from a hooked windows message chain just long enough to cough up and die, only to be resurrected by the swift kick of a boot-time registry key the next time the machine reboots. Some have pretty little labels of well-known companies – clearly so you won’t look twice at them and notice how they are exfiltrating personal browsing statistics and other data to some cloud server – really like malware but allowed by the EULA that you didn’t read. Some of these things don’t seem to have any purpose but to act as a low-fidelity binary listening device.

Everything looks bad. So, it’s no wonder that hackers can just plug something new in and nobody notices. As long as it doesn’t infect five million residential banking customers, nobody is going to have a description of the suspect. That is the reality of hacking today, and it has nothing to do with advanced persistent threat. It has to do with the enterprise and the complete LACK of control you have over the endpoint. When security is limited to the network perimeter, you are not in control. Oh, and what a breath of fresh air the mobile device is. A new pile of software, mostly social media, directly connected to thousands of strangers who are not your employees, communicating in real-time with processes running within your defensive wall. In effect, you now have thousands of potential multi-homed routers to 3G-space* from your network that don’t belong to you.

*4G if you’re lucky

Here are some basic security facts:
  • Today, malware is a tool for persistent adversaries
  • Adversaries are financially or politically motivated
  • Intrusions involve a real human being or hacking group that targets your organization directly (*)
  • Attackers are motivated to steal something from your network
*Somehow in the mid-2000’s it seems like the security industry lost its way and forgot about the basic tenets of Hacking Exposed – unfortunately you cannot condense the hacker problem down to a set of MD5 checksums.
Recently during presentations I have outlined three primary threat groups we face today. I have illustrated the evolution of these in the following diagram.

A. Criminal Enterprise – these are the guys who make more money than drug cartels and the reason a malware economy emerged over the last few years. This is what mere mortals mean when they talk about malware, and the reason people get malware and hackers mixed up all the time.

B. Rogues – these are the hacking groups that you can enumerate on any given day. There are hundreds, if not thousands worldwide. These guys are all capable. The graph expands much slower than criminal enterprise because they aren’t fueled by cash. As early as 2000 these guys were already defacing, DDOSing, and partaking in ‘mostly harmless’ hackery. Yet, a small subset have always been deeply malicious and get pleasure out of destroying things. Others pick up a cause and act like cyber terrorists. And still others really are cyber terrorists.

C. Rogues meet cash - these hired mercenaries are the ones who write malware, sell zero day, and get sucked into the vortex of organized crime. These guys are very, very dangerous.

D. The problem today - all the membranes have been breached - the threat is blended. We live in a time where a state interest can simply buy access to adversary networks from criminals who are selling their botnets. Where state-sponsored attacks can be vectored through private hacking groups. Where private hacking groups can fund their operations from cybercrime while targeting corporations and governments with methodology indistinguishable from APT. There is no tidy bucket in which to place the threat; all the wires are now crossed. The only thing that is consistent here is that hacking is hacking, and it always looks and smells the same when you see it. This is why the term ‘APT’ is so tired.

E. Private hackers working for the man - when you catch Chinese malware in a DoD contractor network, it almost always looks like it was written by a “kid”. This “kid’s” malware is then used to steal the plans for a weapons program that can only have value to the PLA. All the security vendors looking at APT come up with corny little codenames for the hacking groups (HBGary included), but at the end of the day it’s all the same thing.

F. Thank God for APT - a board room level term that we can all use to cover our you-know-what when we tell the man our millions of dollars in security spending has done nothing for us.

If you want a no-holds-barred, no excuses, and no-snakeoil analysis of APT and the reality of countering it, you should check out HBGary’s new whitepaper The New Battlefield.


Wednesday, September 7, 2011

Social Terrorism

Social networking does something to people, intoxicating them with near-zero accountability for impulsive behavior protected under a banner of free speech. Fierce defenders of the social media revolution think that because this technology is novel, it should somehow be afforded a special layer of protection. Social media empowers people, but free speech has never covered shouting ‘fire’ in a crowded theatre, and the medium doesn’t change that. Thankfully there are policy makers and courts who still feel that inciting violence, organizing illegal activities, causing riots, partaking in slander and libel, or engaging in harassment and abuse is wrong and/or criminal in nature regardless of the medium of communication.

New forms of 'fast and wide' communication technology have effectively armed common citizens with an information warfare tool. This is fine, but handle with care. Like any real tool of value, it can cut you. This is not a free speech issue, it's one of safety. When BART wants to shut down communications due to threat of riot and crime, it's their right to do so. When Philadelphia wants to put a curfew in place to stop flash mobs, they are protecting the citizenry. When authorities in London want to curb-stomp looting, they should be able to do things like shut down riot tweeters. When the NYPD runs an intelligence group to hunt down terrorists and criminals on Facebook and Twitter, it's their right to do so - in fact, it's THEIR JOB to do so. If you are dumb enough to put your personal information on the 'net and then commit crimes, fair play (as Lulzsec has learned). Social media companies have a responsibility to work with government, law enforcement, and private authorities to ensure that they aren't enabling damage. Terrorists using Twitter are still terrorists.

When someone makes a false bomb threat, they are committing a crime. When they do it on Twitter, they are still committing a crime. As two people just learned in Mexico, putting it on Twitter doesn't make it legal. And several men were jailed in the UK for using Facebook to incite violence during the riots. And today it's common for cyberbullying cases to be won. Yes, embrace social media, but don't think that entitles people to be assholes.


Tuesday, August 16, 2011

Inside an APT Covert Communications Channel

Note: I shortened the title of the post from "Inside an APT “Comment Crew” Covert Communications Channel" to "Inside an APT Covert Communications Channel". To be clear, multiple threat groups are using HTML comments as a means of COVCOM. Thus, this should be considered a general technique as opposed to attribution on a specific group. Both Shady RAT and "Comment Crew", as well as others with additional codenames, have been associated with the use of HTML comments as a means of COVCOM.

For many years, hackers operating out of China have been attacking a myriad of commercial and government systems here in the US and abroad. The term “APT” or Advanced Persistent Threat has often been used to describe these attackers. While HBGary is primarily a product company selling an enterprise incident response product, the team has been deep into APT analysis for over five years. Most of the analysis work is in direct support of Digital DNA – an automated system for detection of unknown malware and APT intrusions. I presented a technical description of how this attribution works, what it solves and what it doesn’t, at the BlackHat Conference last year. The work is about tracking threat groups – that is, tracking the humans and the human factors behind the digital artifacts we see. There are many hacking groups involved in these intrusions. One such group has often been called “Comment Crew” for their use of HTML comments as a means of command and control. This group has been associated with the recent “Shady RAT” intrusion revealed by McAfee. For this article I am going to give you a technical in-depth tour of how such a group operates.

For starters, the attackers will gain access to the network via spear-phishing. In almost all cases we have investigated, spear-phishing was the initial point of infection. These phishing emails are full of very specific project names, names of associates, official-sounding documents, etc. It is very clear that the hacking group is using stolen email to learn about their targets before crafting a very convincing lure. This underscores why the recent spate of SQLi attacks over the last few months poses a far greater threat than most people realize.

Exploit and Dropper

Once access is gained into the network, the hacking group places remote access tools into the environment. These are backdoor programs that are downloaded automatically by the exploit email – we call these “droppers”. In the diagram, point A shows the exploit email ‘detonating’ after being viewed by the victim, point ‘B’ is a server where a ‘dropper’ is stored, and point ‘C’ is the dropper backdoor being placed onto the compromised computer.

Once the dropper has established a beachhead into the network, a hacker will access the host and uninstall the original backdoor, replacing it with a new and more powerful one. These backdoors, especially the secondary and more powerful ones, are called “RATs” – Remote Access Tools. Many of these RATs are custom written, and that can be the basis for a great deal of attribution, allowing us to detect the malware in physical memory.

Interaction with the Host

Remember that most networks are firewalled. This means the attacker can’t just make a TCP connection into the RAT program. The RAT program is within the internal network, so it must first make an outbound connection to the attacker. The RAT is designed to connect outbound over port 80 or 443 – ports that are allowed outbound by almost all firewall policies. Once the outbound connection is made, the attacker can use the established TCP session to interact with the host, download tools, run command line programs, and laterally move about the network. In the diagram, point A is where the RAT makes an outbound connection to a server on the Internet, point B is a server under the hacker’s control, and point C is where the hacker uses the established TCP connection to interact with the RAT program and subsequently the host environment, potentially exploiting additional machines nearby in the network.

One of the greatest challenges for an incident response team is discerning the difference between ‘normal’ malware and an APT attack. As we can see in this example, an APT attack involves a real human at the other end of the keyboard performing actions on the host. We call this ‘interaction with the host’ and we recommend that an IR team pull a timeline of last-access times from the MFT (master file table), browsing history from index.DAT, the event log, and other sources to determine if such interaction is occurring. This is a fast and easy way to discern the difference between a non-targeted external threat (into which over 80% of all adverse events fall) and an external targeted attack (which includes APT, probably less than 2% of all adverse events).
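A crude sketch of the timeline idea: collapse last-access events from your artifact sources and flag tight clusters of admin-tool usage. The event data, tool list, and thresholds here are all hypothetical:

```python
from datetime import datetime, timedelta

# Hypothetical, pre-extracted last-access events (from MFT, index.DAT,
# event log, etc.). Real pipelines would parse the artifacts directly.
events = [
    ("2011-08-01 03:12:05", "cmd.exe"),
    ("2011-08-01 03:12:41", "net.exe"),
    ("2011-08-01 03:13:10", "rar.exe"),
    ("2011-08-03 09:00:00", "winword.exe"),
]
ADMIN_TOOLS = {"cmd.exe", "net.exe", "net1.exe", "rar.exe", "at.exe"}

def interactive_bursts(evts, window=timedelta(minutes=5), min_hits=2):
    """Flag time windows where several admin tools were touched in quick
    succession -- a crude signal of hands-on-keyboard activity."""
    times = sorted(datetime.strptime(t, "%Y-%m-%d %H:%M:%S")
                   for t, name in evts if name in ADMIN_TOOLS)
    bursts = []
    for t in times:
        cluster = [u for u in times if t <= u <= t + window]
        if len(cluster) >= min_hits and (not bursts or cluster[0] > bursts[-1][-1]):
            bursts.append(cluster)
    return bursts

print(len(interactive_bursts(events)))  # one burst of tool use at ~03:12
```

The 3 a.m. cluster of cmd/net/rar activity is the kind of thing a human attacker produces and a self-propagating worm does not.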

The RAT program doesn’t contain any fancy stealth or anti-forensics measures. In fact, we rarely even see packers in use (a packer is a method of obfuscating a program after compilation and is a low-cost way for a hacker to add anti-forensics to his malware). It seems most of the covert methods are applied to the way the RAT communicates with the hacker. This makes sense. Consider that most of the intrusion detection capability lies at the perimeter of the network, and this is what the hacker is trying to defeat. Thus, the HTML comment method of configuring and controlling the RAT programs.

Hidden Comments for Covert Communication (COVCOM)

Instead of letting the RAT connect directly to his personal server, the hacker will first exploit a webserver somewhere on the Internet. This exploited webserver will then be used as the ‘middleman’ to communicate with the RAT. The hacker will place a hidden comment on an otherwise normal webpage and have the RAT connect outbound to this page. Using the hidden comment, the hacker will be able to give commands to the RAT. The RAT will make periodic outbound connections, sometimes waiting days before checking the page. The hidden comment will contain an encoded message that the RAT knows how to decipher. In this example, the hidden data is base64 encoded. In the diagram, point A is the RAT program making a periodic outbound connection, point B is a compromised webserver somewhere on the Internet, point C is the hidden comment on the webpage, and point D is where said comment is decoded into actual instructions for the RAT. An example of such a comment is shown in the next image. It is interesting to note that the hacker has attempted to make the page look like a 404 HTML error page if viewed in a normal web browser.
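Decoding such a channel is straightforward once the comment is found. A minimal sketch, assuming a simple base64 payload in an HTML comment (the page and tasking string are invented for illustration):

```python
import base64
import re

# A hypothetical COVCOM page: looks like a 404 error, but carries an
# encoded tasking string inside an HTML comment.
page = """<html><head><title>404 Not Found</title></head>
<body><h1>Not Found</h1>
<!-- c2NhbjtzbGVlcD04NjQwMA== -->
</body></html>"""

def extract_covcom(html):
    """Pull base64-looking payloads out of HTML comments and decode them."""
    msgs = []
    for comment in re.findall(r"<!--(.*?)-->", html, re.DOTALL):
        blob = comment.strip()
        if re.fullmatch(r"[A-Za-z0-9+/=]+", blob):
            try:
                msgs.append(base64.b64decode(blob).decode("ascii"))
            except Exception:
                pass   # not valid base64 after all; ignore
    return msgs

print(extract_covcom(page))
```

From the defender’s side, the same logic inverted becomes a detection: a web proxy or IR script that flags pages whose comments decode cleanly from base64 is cheap to run.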

Example of BASE64 Encoded Hidden Comment

Once the RAT decodes the message, the data becomes a configuration file for the malware. The file supports many features: specifying which server addresses to use on the Internet (including backup servers), configuring the check-in times, and even completely updating the RAT binary in the field (shown in the diagram as a .bmp file – this is actually a normal PE-header executable).

The Decoded Configuration File

All of the above technical information can be detected on a host after intrusion. The RAT program itself is near trivial to detect once you know what you are looking for. But beyond that, because the RAT program has certain outbound connection characteristics, sleep timers, and built-in “host interaction” capabilities, HBGary’s Digital DNA lights it up like a Christmas Tree (example shown in image).

Digital DNA Detects Unknown Malware

Even if you had no prior knowledge about this specific RAT, you would have detected it with HBGary. Beyond that, the decoded configuration file can also be found in physical memory – the primary search method used by Active Defense. Regardless of the configuration values, the option headers shown in the example above have a specific pattern that can be detected quite easily, even if fragmented over multiple buffers. This is exactly the kind of information I am referring to when I talk about “actionable threat intelligence”. Once you know about the attacker’s TTP’s (tactics, techniques, and procedures), you can encode this into an enterprise-wide scan. We call it ‘continuous protection’ when you adopt continual scanning while also updating the threat intelligence as you learn more about the attacker. In essence, you are applying attrition against the attacker’s presence in your network. For example, if you know how to detect the above configuration file, then the attacker has to change the way that configuration file looks to defeat you – something that also requires them to recode their parser in the malware. Hence, you cost the attacker time and money. That is a Good Thing.
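The “detect the header shape, not the values” idea can be sketched as follows. The [SERVER0]-style headers are invented for illustration; the real config format differs, but the same approach applies, including rechecking buffer boundaries for fragmented matches:

```python
import re

# Hypothetical option-header shape for the decoded config. The point is
# that the *header pattern* is searchable even when every value changes.
HEADER = re.compile(rb"\[(SERVER\d|SLEEP|UPDATE)\]")

def scan_buffers(buffers):
    """Search a list of memory buffers for config headers, including a
    header split across two adjacent buffers."""
    hits = set(m.group(1) for buf in buffers for m in HEADER.finditer(buf))
    # re-check adjacent buffer boundaries for fragmented matches
    for a, b in zip(buffers, buffers[1:]):
        for m in HEADER.finditer(a[-16:] + b[:16]):
            hits.add(m.group(1))
    return hits

# "[SLEEP]" is deliberately split across the two buffers
bufs = [b"....[SERVER0]=1.2.3.4....[SLE", b"EP]=86400...."]
print(sorted(scan_buffers(bufs)))
```

In a real sweep the buffers would be pages carved from a physical memory image, but the fragmentation handling is the interesting part: the attacker cannot rely on page boundaries to break your signature.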

I hope this gave you a somewhat concrete tour of how a real APT covert communication (COVCOM) channel works. Also, I hope it has illustrated some of the threat intelligence that you can access on the host. Using enterprise-wide scans, your IR or security team can put a severe dent in the APT presence in your network. As far as product solutions to enable you, obviously we build HBGary’s Active Defense. If you are interested in continuous protection and threat intelligence, we offer 50-node evaluations of Active Defense that can be installed on a laptop. We also offer a deploy-on-demand license for incident response teams (our 500-node pack has been quite popular), as well as the perpetual node model for full enterprise proactive deployments.


Monday, August 15, 2011

Shady RAT is Serious Business

Ira Winkler makes some interesting points in his CIO article on Shady RAT. I tend to agree with his observation that security vendors spend too much energy infighting when we all should be facing a common enemy. It is true that Shady RAT is just one of many similar attacks. There is no harm in trying to draw attention to the elephant in the room - APT is a grave and serious threat to U.S. companies as well as national security. Shady RAT may appear to be 'sloppy' but it can still be APT. Within infosec the term APT has been debated - but we at HBGary have a very simple definition: if there is interaction with the host, we call it APT. Now, most of the attacks we deal with are targeting intellectual property and appear to have state-sponsored underpinnings. The attackers usually leave tools behind, additional backdoors, etc., but none of these are very complex. The malware and techniques are mostly unsophisticated and sloppy, yet they succeed and remain persistent. Our assumption on this: APT does the minimum necessary to get the job done. If they don't need hard-core boot sector viruses and kernel rootkits, they aren't going to use them. We as an industry have a responsibility to protect our customers from a very serious and evolving threat. Downplaying the seriousness of this threat undermines the reason we are here.


Tuesday, August 9, 2011

Command Line Programming with Responder PRO

One little-known feature of HBGary’s Responder product is that it ships with the full source code to a command-line version. This command-line version of the product can be customized for automated tools, batch processing, and statistical utilities. HBGary is still working to produce 'official' documentation for the SDK, but in the meantime I figured I would walk the more adventurous of you through some code.

First you need Microsoft Visual Studio. I use VS2008 Pro Edition with version 3.5 SP1 of .NET. In the SDK subdirectory of your Responder installation, you should find the ITHC directory. A bit of backstory: ITHC stands for Inspector Test Harness Client – it was originally a test harness used by our QA team that proved so useful for batch processing that we included it for customers. The code is written in C#.

When I first opened the .sln file on my Responder install, I found that the project file needed some tweaking. Your mileage may vary, but here are some steps I had to take. First, the references to all the Responder DLL’s were broken. By editing the .csproj file I was able to fix this. The trick is to use a HintPath variable with a relative path to the main install directory, which is two folders above the ITHC directory (see image). I’m not sure why it shipped this way, but alas I was able to fix it.
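For reference, the repaired reference entry looked roughly like this (the assembly name and exact relative path are examples; match them to your own install):

```xml
<Reference Include="Inspector">
  <!-- relative HintPath: two folders up from SDK\ITHC to the install root -->
  <HintPath>..\..\Inspector.dll</HintPath>
</Reference>
```

MSBuild falls back to HintPath when the assembly isn’t in the GAC or the project output, which is why a relative path into the install directory fixes the broken references.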

Fixing the references

Now, in most cases, I like programming in Debug mode so I can single step, use breakpoints, inspect variables, etc. I ran into a snag with my debug build and had to get one of the HBGary engineers to take a look. Again, it was a configuration thing. When you make build settings, the platform will probably be set to AnyCPU. You will need to set the platform target to x86 (see image). This has something to do with mixed mode code and if you don’t set this to x86 you will get a binding error when you attempt to run the ITHC exe. Lastly, I set my output path so the ITHC.exe ended up in the main Responder install directory (see image).

Setting the platform target

Setting the output path

Running the tool requires some precise command line arguments (see image). The project path needs to be in the form path/projectname/projectname.proj, and the path to the memory image needs to be fully qualified. If you want to change any of that, you can edit the code in NewProject() and OpenProject() to parse the path differently. At this point I had a fully functional ITHC.exe that would analyze Windows physical memory snapshots.

Command line parameters to the tool

Most of the analysis magic happens in THCAnalyzeFile(). The project file ends with the .proj extension and will be created, or opened if it already exists. There is also a .tmp file that contains cached lookup data for Responder, which only exists after an analysis. THCAnalyzeFile() handles all of this.

At this point I need to explain packages and classes. In Responder, a package is any binary object. For example, the physical memory snapshot is a package. Every extracted livebin is also a package. If you import a file for static analysis, that file is considered a package.

Both packages and classes can have parent/child relationships. The difference is that a class is simply a container without any associated binary data. Think of it as just a folder. In fact, in the Responder GUI, classes are shown as folder icons. Just remember that packages can have child classes, classes can contain other classes, classes can contain packages – there is no restriction on the way you nest these objects.

Around line 249 in the ITHC example you will see the creation of the root package (see image). Every project has a single root package that everything else will reside under. Usually this package has no associated binary object and is simply a placeholder. We usually set this to the name of the forensic case – such as “Case 04321”. In Responder’s GUI, the root package is always shown with a safe icon. Depending on the project type, a class will be created directly under this root package. The name of this class is very important and affects the kinds of things Responder will let you do. So, for a physical memory analysis you need to name this first class "Physical Memory Snapshot". You will see this created around line 266.

root package, bulk update, named attributes

Now just a word on event management. Responder has a robust event alerting system that will post an event to your code whenever an object is modified. You could subscribe to these events and be notified if the user changed a property of an object anywhere in the GUI, for example. But, there is a flipside – if you make a large number of changes all at once you will flood the system with these messages. Most of the time if you are going to change a bunch of objects all at once, you want to disable events for a short time. To do this, you use the BeginBulkUpdate() and EndBulkUpdate() methods. You will see these in use around line 249 (see image).
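
As a sketch of the pattern, with BeginBulkUpdate()/EndBulkUpdate() taken from the text above and everything else (the project object, the SetAttribute() call) a hypothetical placeholder:

    // Suspend event notifications while making many changes at once.
    project.BeginBulkUpdate();
    try
    {
        foreach (var pkg in packagesToModify)
            pkg.SetAttribute("sCaseName", "Case 04321");   // hypothetical call
    }
    finally
    {
        // Re-enable events even if an exception was thrown, otherwise the
        // GUI stops receiving update notifications.
        project.EndBulkUpdate();
    }

The try/finally is the important part: if an exception escapes between the two calls and EndBulkUpdate() never runs, the event system stays disabled.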

Around this same section of code you will also see named attributes being set on the case. These attributes are being applied to the root package, the one that shows up as a safe icon when you view it in Responder’s GUI. Any object, including packages and classes, can have named attributes set. The attribute system is typed and the first letter of the name indicates the type. See my previous post on plugin development for a description of these.

Around line 293 you will see the creation of a second package. This package is the one associated with the physical memory snapshot. It is placed under the root node and folder. You will also see the creation of something called a snapshot that is then linked with the package. This is how you link a binary to the package – via the snapshot object. The snapshot is just a small header of metadata that is associated with the binary file – including the path to the file – and this is set as the “.InitialSnapshot” property of the package. After this step, the package and the binary are linked.
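
In outline, the linkage looks roughly like this. Only the .InitialSnapshot property name comes from the text above; the creation helpers are hypothetical:

    // Create the package for the memory image, create a snapshot (a small
    // metadata header that records the path to the binary), and link the
    // two by assigning the snapshot to the package.
    var memPackage = CreatePackage("memdump.bin");                  // hypothetical
    var snapshot = CreateSnapshot(@"C:\cases\04321\memdump.bin");   // hypothetical
    memPackage.InitialSnapshot = snapshot;   // package and binary are now linked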

package and snapshot for the physical memory image

The most important function is then called – the AnalyzeMemory() function (around line 329). This function performs the bulk of the memory analysis. It returns true or false depending on whether it understood the memory snapshot. One note: it will return false if you don’t have a valid license. Even with the free version of Responder CE, you still have a license file that must be present or this call will bail out on you.

After analysis is complete, the analysis history is updated to include “WPMA”. This tells Responder that “WPMA” analysis has already completed, so it won’t attempt a second analysis later. Note: WPMA means Windows Physical Memory Analysis. Responder has other analysis types that can be added to this history. You can also add your own for reference later.
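
A sketch of the guard this enables (the history accessors here are illustrative assumptions; only the "WPMA" token and AnalyzeMemory() appear in the example code):

    // Skip re-analysis if "WPMA" is already recorded in the history.
    if (!project.AnalysisHistory.Contains("WPMA"))       // hypothetical accessor
    {
        // AnalyzeMemory() returns false if the snapshot isn't understood
        // or no valid license file is present.
        if (AnalyzeMemory(project))
            project.AnalysisHistory.Add("WPMA");         // hypothetical accessor
    }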

Now that analysis is complete you can parse the datastore, query all the found windows objects, processes, modules, etc. You can also query the DDNA results if you are using the Pro version. Some object types, such as control flow, disassembly, dataflow, graph objects, and recon traces are only available in the Pro version. However, the results of the windows memory analysis are fully available in all versions, including the free CE version. See the THCDumpProject() function for more information on parsing the project’s object tree.

Package: ws2_32.dll
Parent Package: svchost.exe
Length: 0 bytes.
Class: Symbols
Class: Strings
Class: Report Items
Class: Global
Package: vmwaretray.exe
Parent Package: VMwareTray.exe
Length: 0 bytes.
Class: Strings
Class: Global
Class: Report Items
Class: Symbols
Package: msctf.dll
Parent Package: IEXPLORE.EXE
Length: 0 bytes.
Class: Strings
Class: Symbols
Class: Global
Class: Report Items

a short snippet of output from the THCDumpProject() function

For those of you using the Pro version, ITHC includes examples of not just physical memory analysis, but also extraction of livebins and code-level analysis of extracted livebins. If you made it this far, take a look at AnalyzePackage(), AnalyzeExtractedPackage(), and ExtractPEImageFromMemory() to get more familiar with the code-level analysis features. I hope to write more specific posts about these features in the near future.

ITHC.exe analyzing a memory snapshot

Because the ITHC utility is written in C#, it’s very easy to interface with other systems. Microsoft has done a good job building a robust set of APIs for SQL database access, file serialization, web and TCP/IP communication, regular expressions, and more. All of this is at your fingertips and can be interfaced with the results of physical memory assessments. I am partial to building bulk analysis tools for large directories of memory snapshots. You are only limited by your imagination.

The SDK directory should be in your Responder install directory. If you are using the free Community Edition, you may not have the SDK directory; in that case, you can download the SDK as a small separate download from the free tools section of HBGary's support site.

Tuesday, July 26, 2011

Asymmetric Warfare and Cyber Terrorism

In the newly released document, “DoD Strategy for Operating in Cyberspace", the Pentagon states that “while the threat to intellectual property is often less visible than the threat to critical infrastructure, it may be the most pervasive cyber threat today.” Pervasive, yes – but not necessarily the most dangerous.

In 2003, I founded my company, with the help of the federal government’s Small Business Innovation Research (SBIR) program, to develop products to counter these advanced, unknown, stealthy cyberthreats, today often referred to within the security community as Advanced Persistent Threats (APT).

While the APT threat is significant, the attacker may take months or even years to steal the information. The recent attacks by small hacking groups, however, illustrate a far more tangible, immediate, and potentially more severe form of economic damage. It is appropriate to classify these acts as asymmetric warfare, and possibly as a type of cyberterrorism.

In contrast to APT threat actors and other traditional cyber criminals, cyberterrorists are not motivated by monetary gain. Instead, the cyberterrorist wants to cause grave harm or economic damage as quickly as possible, and to get attention for it. Attacks may be economic, political, or even shutting down the power in the dead of winter. The technical aspects of the attack may be similar to APT, but the intent and goal is wholly different.

Cyberterrorism first became a buzzword in the late 90’s, associated with power outages and explosions orchestrated over computer networks. These types of attacks seemed like the digital equivalent of IEDs. While traditional terrorists clearly use the Internet to recruit and communicate, we operate under the assumption that the ‘ground of action’ is still the physical world – think suicide bombers. But recent events have shown that attacks don’t have to be kinetic to cause damage. The ground of action can be entirely in cyberspace, damages can be measured in billions of dollars of stock value, and the threats to persons are very real.
Edit: There are different views on the definition of cyberterrorism. In 'Computer Attack and Cyberterrorism: Vulnerabilities and Policy Issues for Congress', Clay Wilson defines two forms of cyberterrorism:

Effects-based: Cyberterrorism exists when computer attacks result in effects that are disruptive enough to generate fear comparable to a traditional act of terrorism, even if done by criminals.

Intent-based: Cyberterrorism exists when unlawful or politically motivated computer attacks are done to intimidate or coerce a government or people to further a political objective, or to cause grave harm or severe economic damage.

Since the early 2000’s, ‘electronic jihadists’ (e.g., Younes Tsouli, Mohammad Peerbhoy) and other hacking groups have been content with web defacement and the occasional DDoS. But these actions never gained media attention like the recent spree of hacks in 2011. This is, in part, due to the advent of social networking. Former British Prime Minister Margaret Thatcher once stated, “Publicity is the oxygen of terrorism.” Anyone versed in matters of terrorism knows that the primary goal of terrorism is media attention. The act is secondary to the message.

Younes Tsouli and Mohammad Peerbhoy, both criminal hackers working with Islamic extremist groups (photos via Associated Press)

A small sampling of criminal hacking groups operating in the Middle East. All of these groups are at least as skilled as the current Lulzsec/Anonymous hackers, as evidenced by similar techniques, use of SQL injection, etc. The myth that traditional terrorist groups don't have access to hacking skill is simply outdated.

In the words of William Gibson, “Terrorism is ultimately about branding.” Every press release, tweet, and claim is part of that brand, raising awareness for the group’s cause or message. And the media can function as an extension of the group’s propaganda machine. As TechCrunch columnist Paul Carr recently pointed out in his piece on the media coverage of the now-defunct LulzSec group, most journalists were all too happy to hop aboard the ‘Lulz Boat’, parrot propaganda verbatim without a hint of criticism, and provide ‘celebrity fluff’ reporting. Carr especially calls out online journalists and bloggers as “downright shameful” for showing support for these criminal hackers. Gene Spafford, professor and director at Purdue University and a leading security expert, has also objected to how reporters romanticize criminal hackers, drawing a parallel to the computer virus authors of the early 90’s who were portrayed as “swashbuckling, electronic pirates” (and pointing out that their legacy is now costing billions in damages).

Even in recent days, reporters have used lofty, inconsistent terms such as “masked crusaders,” a “loose hacker movement” and an “online activist group” to describe Anonymous. The fear of retribution by the criminal hackers within this group is real. No one wants to become a target. News organizations need to take a step back and take a close look at how they are covering these incidents and make sure they aren't enabling these groups’ propaganda machine.
Edit: as a case in point, notice the significant lack of the word 'criminal' when media reports on Anonymous/Lulzsec. To illustrate, here is how reporters/bloggers described Anonymous in the 24 hours following the Monsanto/Booz Allen Hamilton attacks:

"Online activist collective" - CNET
"hacker group" -- IT Business Edge
"Hactivist collective" -- The Inquirer
"Hacking Group" -- MSNBC
"Hacktivist Group" -- SC Magazine
"Hacker Group" -- WSJ
"Hacker Group" -- Network World/IDG
"Notorius Hactivist Collective" -- The Register
"Group of hactivist computer-savvy hackers" -- Economist
"Loose-hacker movement" -- Forbes
"Masked crusaders" -- Time
"Cyber-activist group" -- Financial Times
"Hacker Group" -- Dark Reading
"Online Activist Group" -- Associated Press
"Hacker Group" -- BBC News
"Hacking collective" -- NY Times
"Hacker Group" -- Washington Post

While the threat landscape is always changing, we must continue to highlight that a real criminal is at the other end of the keyboard, and that he is persistent and will keep coming back. While the DoD outlines some important initiatives for a more secure cyberspace, we, as citizens, also have a role. Just as we all participate in our local neighborhood watch to keep our physical community safe, we, as Internet users, need to be vigilant and work together to ensure our cyberspace remains safe.

-Greg Hoglund

Thursday, June 23, 2011

Scripting with Responder™ Community Edition

One of the most powerful features of Responder (all three versions, including the free Community Edition) is the ability to write custom plugins. The entire application is basically a GUI over an API. You have the ability to access this same API and extend the application in any way. HBGary hasn’t produced an official SDK document yet, so it’s best to learn by example. For this exercise, I am going to illustrate a plugin that ties information from Responder into Google maps.

First, you should become familiar with the object tree. The object tree (shown in the graphic below, point A) illustrates how the data is organized within Responder after a physical memory snapshot has been reconstructed. You can query any of this data directly using the Responder APIs. For example, you could query low-level details about running processes (point B).

For this example, we are going to query the open network sockets. These are reconstructed from internal undocumented structures within the kernel (the same ones used by tcpip.sys and afd.sys). Even if a rootkit is hooking netstat, the data would still be revealed in Responder. In our example, we have some outbound connections to China. Using our plug-in, we are going to read the connection data and plot the location of the registering entity using Google Maps.

To load the script, first go to the script TAB and select OPEN. Once open, the script will be visible in a code-editing window. Press the PLAY button to load the script.

As you can see, the script is written in C#. Almost all of the GUI components in Responder are written using C# and, for those who haven’t tried it, you will find it to be very similar to Java. The language is very easy to learn and use.
After we load the plugin, the list of network connections is obtained along with registration data. The address of the registration is then plotted on Google Maps.

When a plugin is loaded, the OnLoad function will be called with a list of all open “Documents”. In Responder, a “Document” is a container for data. The architecture requires that the user-interface be decoupled from the data. For those of you with programming experience, you may recognize the “Document/View” pattern here. At any rate, the list of open documents is passed into the OnLoad function and we need to locate the “NetworkBrowserDocument”. The network browser document has the list of all open sockets.

public bool OnLoad(ArrayList OpenDocuments)
{
    // get the frame document; this allows us to add menu items and menu bars
    _frame = FindMainWindow(OpenDocuments);

    // see the Launch() subroutine to learn how to launch your own popup window

    // init the whois class for later use
    _whois.ResponderForm = (Form)_frame.MainWindowInstance;
    _whois.Inspector = FindInspector(OpenDocuments);
    // the network browser document gives access to open sockets
    _whois.Net = FindNetworkBrowserDocument(OpenDocuments);

    return true;
}

For those who want to explore other documents, there are several example plugins that ship with Responder. For example, "StringsBrowserDocument" is responsible for showing lists of strings associated with a livebin. "SymbolsBrowserDocument" is responsible for symbols when a livebin has been disassembled (Responder PRO only). The "DriversBrowserDocument" has the list of detected device drivers.

In this plugin example, we have a helper function defined to locate the network browser document. Notice we use GetType() to locate the actual type of each document in the list. As stated, there are many different document types in Responder, usually one type for every visible window or panel in the application.

Logic.NetworkBrowserDocument FindNetworkBrowserDocument(ArrayList documents)
{
    // note the use of the IDocument interface class here;
    // use GetType() to compare the instanced type against Logic.XXXX where
    // XXXX is the document type you are after. Use reflection to see the
    // whole list...
    foreach (IDocument doc in documents)
    {
        if (doc.GetType() == typeof(Logic.NetworkBrowserDocument))
            return (Logic.NetworkBrowserDocument)doc;
    }

    return null;
}

After finding the network document we can use it to query the list of sockets. Documents will have custom methods and utility functions for dealing with specific data (these are all different depending on document type). You can also access the raw data directly, usually in the form of name/value pairs (my preferred way to do it). This is shown below. Each attribute has a specific name and type as shown.

ArrayList socks = _net.Sockets();

// all objects are referenced by GUID
foreach (Guid socketEntryID in socks)
{
    // src and dest IP are stored as strings
    string source = _net.ObjectName(socketEntryID, "sSource") as string;
    string target = _net.ObjectName(socketEntryID, "sDestination") as string;

    // remember that 'i' is UNSIGNED
    UInt32 sourcePort = (UInt32)_net.ObjectName(socketEntryID, "iSourcePort");
    UInt32 targetPort = (UInt32)_net.ObjectName(socketEntryID, "iDestinationPort");

    // the src and dest DNS names, strings as well
    string sourcename = _net.ObjectName(socketEntryID, "sSourceName") as string;
    string destname = _net.ObjectName(socketEntryID, "sDestinationName") as string;

    // a bool stores whether the session is TCP or UDP
    bool bTcp = (bool)_net.ObjectName(socketEntryID, "bIsTCP");
    string sockType = bTcp ? "TCP" : "UDP";
}

The socket list is stored as a list of object IDs. Responder uses a GUID to identify every object in the project database. Every object found in the physical memory snapshot is assigned a GUID and can subsequently be looked up. In this example, we have a list of objects representing sockets. The object ID can then be used to query additional attributes – here, “sSource”, “sDestination”, “iSourcePort”, and so on. This is the generic attribute naming system used by Responder. The prefix indicates the type: ‘s’ means string, ‘i’ means integer, ‘b’ means bool. There are hundreds of these named attributes across the application – something I hope HBGary writes an SDK document for soon.

After obtaining the source and destination IP’s, our example plugin has a Whois class that is used to lookup the name and address of the registrar. This data is then passed to a browser control along with the URL for Google Maps so the location will be mapped on the right.
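
To make the last step concrete, here is a minimal sketch. The URL is just the public Google Maps query string; the address value and browser-control name are illustrative:

    // Build a Google Maps query URL from the registrant's street address
    // (as returned by the Whois lookup) and navigate the embedded browser
    // control to it.
    string address = "No. 1 Example Road, Beijing, CN";   // illustrative value
    string url = "https://maps.google.com/maps?q=" + Uri.EscapeDataString(address);
    // _browser.Navigate(url);   // hypothetical control name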

This plugin could be extended in many ways. For example, a geoip database or service like ip2location could be used to locate the missile-coordinates for a specific IP address, as opposed to the registration data. The plugin could also be extended to extract IP addresses from artifacts in memory, as opposed to active connections in the socket list. For example, IP address fragments stored in tagged page pool memory.

The plugin is open source and can be downloaded from HBGary’s support site.


P.S. Thanks to Dean, the HBGary engineer who wrote this plugin.

Wednesday, June 15, 2011

Changing APT Tactics: Remote-Access Tools vs. Stolen Credentials

Advanced Persistent Threats (APT) are adaptive; their tactics cycle after an intrusion takes place. For example, an APT group may start to lean away from RATs (remote-access tools) and rely more on stolen credentials. Let me explain.

An APT initially will enter the network via malware, typically through spear-phishing. Once on the compromised host, the threat actor will place one or more RATs into the environment. If we pick up RATs with our Digital DNA solution or another indicator, we start hunting them down.  After targeting and removing these RATs in the customer environment, we have found that specific malware will last about a week, maybe two, before the APT drops it altogether and switches tactics to remain in the network. We commonly see APT shift to using stolen credentials and no malware at all.

Stolen credentials are the very currency of APT. As it turns out, it’s much harder to detect malicious users than to detect RATs. In fact, the APT will use these accounts the same way a legitimate admin would – making it very hard to tell the difference. They create file shares, use the ADMIN$ share, and defrag the hard drive. APTs will even update the AV and patch the machine. Of course, the defrag is actually a way to cover up forensic evidence on the drive, and the ADMIN$ share is a way to move malware and tools laterally between machines. One would think that upgrading the AV would run counter to an APT’s self-preservation. Actually, the APT updates it precisely for self-preservation – to appear “normal”, like a legitimate admin.

At this point in the investigation, in terms of malware, we are still picking up a great deal of material – but not RATs. When the APT shifts to credentials, we start to pick up password sniffers and keyloggers that have no outside network capability. The malware in this case is entirely focused on obtaining more credentials. Finally, once the customer updates all the passwords, one or more RATs pop out of the woodwork and the cycle repeats itself. 


Wednesday, May 25, 2011

A Brief History of Physical Memory Forensics

Lately, we have been doing a lot of work around physical memory forensics. Recently, we released the free, community edition of our Responder™ product and plan to release the fourth generation of our memory analysis engine later this year. During this work, I have been reflecting on the origins and advancements in the field of physical memory forensics over the last 10 years.

In the early 2000’s, two headline-making malware infections, Code Red and SQL Slammer, demonstrated the possibility that malware could reside only in memory and never leave a file on disk. In the world of incident response, the evidence challenged the traditional notion of dead-box forensics. It meant that critical data would not be obtained by the traditional forensic methodology. It also set the stage for future malware that would subvert API calls, forcing live response scripts to rely on the OS as little as possible.

Physical memory analysis started as crash dump analysis for debugging, but it soon became apparent that volatile data in memory could contain encryption keys, passwords, and other critical information about recent user activity. From a tools perspective, the well-known dd utility has been able to acquire memory from the start, simply by reading /dev/mem or \Device\PhysicalMemory. Other memory tools also emerged. In 2002, Eoghan Casey documented how Arne Vidstrom’s PMDump tool could be used to dump virtual memory and defeat PGPTray.

Rootkits helped drive development of memory forensics – more for malware detection than evidence collection. In 2003, Jamie Butler demonstrated the DKOM (Direct Kernel Object Manipulation) method for hiding processes by removing items from a linked list directly in memory. This was a data-only attack and didn’t involve any kernel hooking. It would be a few years before researchers like Andreas Schuster and Chris Betz developed memory-forensics methods for finding hidden processes that countered Butler’s DKOM. Things took another significant step forward in 2005 when Sherri Sparks released Shadow Walker, a rootkit that was able to hide sections of virtual memory from scanning tools. This led to the notion of physical memory acquisition – using a raw dump of RAM instead of OS-supplied virtual memory reads – as a means for rootkit detection.

Attempts at OS reconstruction didn’t really start until the DFRWS memory analysis challenge in 2005, where George Garner [kntlist] and Chris Betz [memparser] developed process and thread reconstruction for Windows®. Everything changed after this – instead of searching for binary patterns and strings, the memory image was seen as a complex snapshot of interrelated structures and data arrays. A keystone development was the ability to discover the page tables in physical RAM and thus translate virtual addresses to their physical offsets. In February 2006, I wrote the first version of this technology for HBGary using the self-referencing physical address pointer trick (AFAIK first publicly documented by Joe Stewart with the TRUMAN project), and we soon added PAE support. Physical memory forensics had become a hot new area of research. Later that year Mariusz Burdach presented on physical memory forensics at the Blackhat conference. Jamie continued his research as well and presented numerous advances in using physical memory analysis to detect rootkits at the Blackhat 2007 conference. Shortly after Jamie’s talk, AAron Walters released Volatility. It was these initial advances in page table translation and OS reconstruction that led to “modern” physical memory analysis.

By this time, Brian Carrier and Joe Grand had already released Tribble, a PCI card that could monitor and analyze physical memory. Later, several commercial attempts were made to build a rootkit protection solution in the form of a PCI card. Via a DHS grant, HBGary was subcontracted to work on a similar project, and this led to a prototype PCI card that could analyze Windows XP and detect kernel hooks. Jamie Butler joined Komoku, which had already built a similar device, around that time. Joanna Rutkowska was quick to respond to all of this and developed an extremely low-level, software-only rootkit for Windows that could defeat even a PCI-based physical memory read – by reprogramming microchips that are part of the bus controller and I/O chipset. In the end, a hardware solution for rootkit detection was not economically feasible and these projects were never successfully commercialized.

HBGary’s work on the hardware PCI card was the genesis for more R&D memory forensics work to come. We abandoned the hardware approach and developed a software library called WPMA (Windows Physical Memory Assessment) - written in C++ and core to Responder’s memory parser. We later developed a second-generation parser and started reverse engineering all the different memory footprints left by every conceivable version of Windows and service pack (we didn’t analyze NT 4.0 – only Win2K and newer). It took about two years to get the Windows platform complete. This work led to the development of our flagship product, Responder™, and the library that performs the physical memory parsing is integrated into our enterprise product’s Active Defense™ agent as well.

I’ve highlighted only a few of the researchers in this important field of physical memory forensics – there are many others who have also made significant contributions. At HBGary, as I mentioned, we will soon release a completely rewritten version of our physical memory analysis engine marking the fourth generation of the technology. Recently, I was watching the performance testing in the lab and I have yet to see it cap 150 MB memory usage while analyzing a 10-gig snapshot, and it is about 30% faster than our current generation. I will post more details on this work as we progress, as the new engine has many additional features that extend our Digital DNA™ technology.

-Greg Hoglund

Thursday, May 12, 2011

Stop PDF Exploits Cold

I’m happy to announce that HBGary has released another free tool, similar to the Aurora scanner and the Chinese RAT catcher tools we released in past months. This one isn’t looking for malware, however. Acroscrub is an agentless scan of the enterprise that will find out-of-date versions of Acrobat Reader. Adobe is pretty good about patching vulnerabilities, but many machines in the enterprise won’t have the latest version of Acrobat Reader. PDF exploits are a common method used in spearphishing attacks and APT intrusions, so it’s imperative that organizations keep this software up to date. HBGary has released many popular free tools over the years and Acroscrub is another cool addition to the toolbox.

All of the existing free tools are available to users on the HBGary support site. We have upgraded the security on the community support site and now require two-factor authentication for all access, both for commercial customers and for free tools, so that means no more direct downloads. I support this upgrade to authentication and believe it is acceptable for legitimate practitioners in the security industry.

Tuesday, April 19, 2011

Is APT really about the person and not the malware?

Maybe the “APT is person not malware” pendulum is swinging to the extreme. Understandably it’s a response to commercial enterprises being obsessed with pure-play malware detection. But what is the alternative? Spend tons of money on consulting and RE/forensic services for years on end? Customers are tired of paying for that. They must build a security methodology that accounts for persistent attackers – something that can be managed internally and that leverages automated detection as much as possible. To that end, detecting APT must include the malware, tools, and codified threat intelligence.

As tired as it is, the ‘hacking exposed’ story hasn’t changed. We must continue to highlight that a real criminal is at the other end of the keyboard, and that he is persistent and will keep coming back. We know that he will use more than one tool, more than one method of entry, and he won’t go away no matter what kind of malware detection you have. But the idea that it’s all about the human and not malware or TTPs is simply untrue. Malware and TTPs have a critical role to play in combating APT.

To date this year, HBGary has identified and tracked multiple human threat actors using the science of attribution, many of them operating overseas. Our attribution begins with profiling the CnC, the developer toolmarks, and the forensic artifacts left behind after an intrusion. While some RATs are “easy to detect, difficult to attribute” (e.g., Poison Ivy), we have also found modified and custom tools that contain unique indicators. This information can be used along with open source intelligence and link analysis (we heart Maltego) to locate online identities, forums, and social spaces. This can lead to the discovery of real identities – the attacker’s real name, address, and even photographs.

It makes no sense to separate the human from the malware and TTPs. They are two ends of the same spectrum. This is not a black-and-white science; it works because humans aren’t perfect. It works because humans are creatures of habit and tend to use what they know. They use the same tools every day and don’t rewrite their malware every morning. They don’t have perfect OPSEC. They put their digital footprints out on the Internet long ago – and those footprints are usually just a few clicks away from discovery. There is a reflection of the threat actor behind every intrusion. To discount this is to discount forensic science.

Digital attribution is important because it scales. An army of consultants watching your network does not scale, they don’t share their threat data, and they’re expensive. Couple that with out-of-date methods for determining a breach (imaging a 500GB hard drive to find 200 bytes of actionable data) and you can see why customers want and need a better solution to empower their own teams. This is why researching automated methods for threat detection is so important. Threat detection leads to threat intelligence: actionable data you can feed back into your process to make it more difficult for the attacker to succeed in your network. For example, endpoint physical memory can reveal decrypted CnC addresses that plug directly into the perimeter IDS – making your existing investment smarter.

For me, the concept is clear – reverse engineer the endpoint hosts down to the rawest dataset. From this, automatically piece together the parts that appear to relate to suspicious activity. Map this against a database of known malicious behaviors – software, host, timeline, forensic, all of it. Do this automatically and alert on the outliers. HBGary’s Digital DNA does this by using a weighted fuzzy hash of the behaviors. Fuzzy hash because hashes are understood in the enterprise, and weighted because security is a risk management problem that begs for red/yellow/green. The result is huge scalability and effectiveness for a problem that is traditionally expensive and understaffed.

Tuesday, April 12, 2011

Two new threat intelligence papers CSOs will want to read

Industrial Espionage in the Global Energy Market

Since 2005, HBGary has been tracking variants of malware created in and originating from China that indicate a complex cyber espionage operation targeting multiple industries, including the energy sector. In this new whitepaper, "Industrial Espionage in the Global Energy Market," HBGary provides technical details about these cyberattacks as well as the type of critical data targeted, successfully obtained, and sent back to China. This report is restricted release to qualified executives, government, and law enforcement only. Available from

Threats in the Age of WikiLeaks

HBGary has released its threat report ‘Threats in the Age of WikiLeaks’ – CSO's will want to read this report. Cyber-threats are evolving fast, but we must stay ahead if we are to secure our information systems and our brands. With leak platforms (WikiLeaks, AnonLeaks, CrowdLeaks, InfoLeaks, People’s Liberation Front) comes the increased risk of insider threats and acts of information terrorism. Unlike traditional APT, which inflicts damage over years, leak platforms represent immediate damage to stock value, profitability, and brand. Acts of cyber terrorism can disrupt systems and business continuity. To date, the severity of this threat has been underplayed in the press – this report exposes the true and dangerous nature of the threat. The report provides immediate and actionable data to help you detect potential insider threats and attacks. This report is restricted release to qualified executives, government, and law enforcement only. Available from

Friday, April 8, 2011

Rootkit Evolution

Over the last few years HBGary has researched significant advancements in rootkit technology. We are pushing the envelope of what’s possible in the Windows kernel. I’m glad to say that we haven’t seen anything in the wild that is remotely close to what we have developed in our labs. So, we are still ahead of the threat. This keeps our Digital DNA ‘frosty’, so to speak, but probably further ahead of the threat curve than it needs to be. That’s not a bad thing for people protecting against APT – we want to stay one step ahead of the bad guys. For those who have followed my work in rootkits over the years, you probably noticed I stopped releasing public material on the subject years ago. This is because I didn't want to educate the bad guys on how to develop this stuff. But that doesn’t mean the research has stopped – just that some things should only be briefed behind closed doors.

Monday, March 14, 2011

Cyber Conflict and State Power

There has been a rapid change in the global security paradigm. Cyberspace has fundamentally changed the stability between state and society. New conflict groups are not tied to any one state, and conflict is booming. Dangers come from many sources, not just the military. The distinction between civilian, domestic, guerilla, terrorist, and criminal is blurred – small numbers of individuals can inflict great harm upon the establishment, perhaps more so than any army. Recent activities have been directed at states themselves (Egypt/Iran/US/Estonia/Georgia). International bodies have been notably absent in their duties to protect their members (UN/NATO).

The security environment is defined by the state’s weakness in cyberspace. The borders are permeable because the information flow is weakly controlled – there is no better example than WikiLeaks. The threat today is not from the projection of power, but instead from the projection of instability. Power projection defines a state's ability to influence and enforce its policy globally, which can be seriously harmed by not applying equal effort in cyberspace (Georgian conflict). You need a passport to travel to a foreign land but can reach that country's marketplace in milliseconds via cyberspace, without ever crossing a checkpoint. Any group can influence a state's population using social media outlets, including but not limited to instigating riots or uprisings (Egypt/Iran), as well as spreading disinformation.

The U.S. war on terrorism is an example of this fight. The shadowy cell-based terrorist network cannot be linked to any one state. We live in an increasingly borderless world system. Groups are recruited and mustered entirely on the international stage of cyberspace, and include members from many countries. New conflict actors are flocking to cyberspace for communication, organization, and as a medium of attack – both directly through criminal assault and through influence campaigns and control of media. Threat actors include transnational criminals, warlords for profit, economic insurgents, state intelligence, and agents of industrial espionage.

Cyber is a zone of lawlessness and conflict. While not armed in the traditional sense of explosives, the landscape is ripe for soft munitions that can alter industrial operations with a few lines of code (Stuxnet). The traditional means of peaceful activists have migrated to acts of a criminal nature, favoring methods such as denial of service, intimidation, theft, harassment, defamation, disinformation, hacking, and cyber-thuggery. Peaceful protests such as sit-ins or boycotts have been replaced by violations of Federal statutes without fear of prosecution, and states are increasingly challenged to bring charges against the perpetrators because of the attackers' ability to exploit the world stage of cyberspace.

When the citizens of one nation wage cyberwar against the government of another, the international treaties that trigger the right to wage war (jus ad bellum) are absent, and the conduct of a nation defending itself against such acts is ungoverned (jus in bello).

Unless all nations cooperate to develop and enforce regulations, treaties, extradition, and cyber checkpoints, these acts will continue with increasing severity.

-Greg Hoglund