Just as deception has
enabled success on the kinetic battlefield for millennia, it’s gaining a place
in the defensive cyber toolkit for many sophisticated organizations. Most
cybersecurity professionals are familiar with honeypots and their more sophisticated
cousin, the honeynet. These are usually deployed for one of two purposes:
adversary detection and research. As an adversary detection mechanism,
honeypots are expected to entice an attacker into connecting under the premise
that a legitimate user would never connect to a system that doesn’t offer any
real services on the network. As a research tool, honeypots are used to observe
the behavior of attackers and collect malware and tool samples. By allowing an
adversary to break into the system, researchers can gather data about the
latest tactics and techniques used by attackers who might be targeting their
organizations.
This article will focus on crafting
deceptions for the defense. Honeypots and honeynets aren’t the only tools of
deception available to cyber defenders. Deceptions can be created using
practically any means available, and the utility of deception as a defensive
tool extends far beyond detection and research. However,
unlike other cybersecurity tools, deception doesn’t come with a user guide.
Many defenders struggle to deploy deception effectively, and this has resulted
in a loss of stakeholder confidence in deception as a worthwhile use of
resources. By examining specific difficulties that many organizations experience
with deception tactics, defenders can gain insight into the conditions required
for success in cyber deceptions. This insight can then be combined with a basic
deception methodology and a bit of adversary focus to craft effective cyber
deceptions that enhance security and lead attackers down the path to defeat.
Deceiving an adversary requires an ability
to anticipate the conclusions an observer will draw in response to indicators
provided by the deceiver. That is, defenders must have the ability to see
through the adversary’s eyes, think like the adversary, and decide as the
adversary would. In cases where defenders have the insight to do all this, a deception
may still fail to achieve its intent due to a lack of measures of
effectiveness, a lack of integration with the rest of the defensive concept, or
even due to cognitive and logical failures.
Lack of measures of effectiveness
One difficulty that many organizations experience with cyber deception is the
inability to determine whether the deception has succeeded. This takes away
their ability to control the deception, which results in a waste of resources on
ineffective lines of effort. During Operation Fortitude, the Allies leveraged a
network of double agents to feed the Germans disinformation about the upcoming
invasion. Just as the loyalty of double agents flows in both directions,
their existence allows information to flow in both directions as well. Allied
war planners were able to judge the effectiveness of their deception by the
type of information Germany requested from its agents. When the Germans began
tasking double agents to gather information about fictional military units,
Fortitude’s planners knew they’d succeeded. This feedback loop was also
critical in identifying ineffective elements of the deception, because the
Allies could assume that the Germans would not request additional detail on
reports that they did not consider credible.
Most cyber deceptions lack this feedback
loop. Consider an organization that deploys a honeynet in order to distract
intruders on their internal network from legitimate high value targets. If
defenders never observe anyone interacting with the honeynet, they cannot
conclude that their network hasn’t been breached. Instead, it may be the case that an adversary has recognized
the honeynet as a deception and avoided it. It may also be the case that an
adversary has failed to observe the honeynet altogether and instead discovered
and exploited their intended target without being detected. The fact that
defenders usually can’t observe attacker behavior directly means that they must
devote extra effort to devising appropriate measures of effectiveness to avoid
wasting resources on ineffective deceptions.
Lack of integration
While deception has been noted as a key
pillar in many historical battlefield successes, it has no ability to succeed
on its own. Indeed, the one characteristic common to all of the
anecdotes discussed in this article is the fact that the side that successfully
employed deception had the resources required to exploit their success. Few
deceptions are strong enough to convince a dominant attacker to avoid battle
altogether or to convince a defender to yield to an inferior force without a
fight. The same is true for effective cyber deceptions. Defenders must already
have implemented a robust concept of defense rooted in a comprehensive security
monitoring and incident response capability before resorting to deception.
Many organizations, desperate to quickly
deploy any mechanism that might increase their chances of detecting or
deterring an attacker, fail to do this. Instead, they use cyber deception piecemeal
and in an uncoordinated way by focusing on simple tricks like changing service
banners, using unexpected names for privileged accounts (shunning the “admin”
and “root” user names), and deploying honeypots. While these ploys are
genuinely clever and useful, they can’t thwart a determined attacker for any
length of time on their own.
Defenders should think of cyber deception
as an obstacle- it may slow an attacker, but it won’t stop a determined one
without backing from additional defensive measures. U.S. military doctrine offers
an additional parallel between obstacles and a well-integrated cyber deception: “The effectiveness of obstacles is enhanced considerably when
covered by observation and fire.”  The reason for this is straightforward.
Consider a wire fence intended to block enemy maneuver. If defenders didn’t
have the fence under observation with the ability to fire at those who
approached it, nothing would stop an enemy from simply cutting the wire and
breaching the fence. The same is true of cyber defenses. Defenders must
diligently watch and maintain their deception mechanisms and keep them
synchronized with the rest of the defense in order for them to perform their
desired function in altering the behavior of an attacker. When a deception has
succeeded, defenders must also be prepared to exploit their success immediately
in order to eradicate intruders or protect critical resources.
The reasons a deception succeeds are deeply
rooted in the hopes, fears, prejudices, and vanities of an adversary. The
reasons a deception fails are usually attributable to the deceiver. In order to
predict an attacker’s reaction to each element of a deception and to the
deception story as a whole, the defender must be able to anticipate the
reasoning and conclusions of the attacker. Empathizing with an adversary well
enough to project how he will react in
the future requires considerable insight that cyber defenders have little
opportunity to gain. When the defender has enough information about an attacker
to draw conclusions, he may still fail in developing an effective cyber
deception due to one or more failures in logic. The following are a few
cognitive challenges that thwart cyber defenders in crafting an effective
deception.
Mirror-imaging
Mirror-imaging is the assumption by a
defender (conscious or unconscious) that the adversary thinks the same way as
the defender himself.  Mirror-imaging derails the entire analysis of a
deception plan from the assessment of the way an adversary will interpret
observed indicators, and instead results in an assessment of how the defender would react under the same
circumstances. Would-be deceivers who make the mistake of projecting themselves
into the adversary’s shoes only succeed in deceiving themselves.
Rational Actor Hypothesis
The rational actor hypothesis, closely
related to mirror-imaging, assumes that an adversary not only reasons carefully
before taking action but also that his definition of rational behavior matches
that of the defender. The rational actor hypothesis perverts the defender’s
expected outcome of a deception and may affect future analyses if observed
adversary behavior causes defenders to conclude that, in fact, an attacker is
not rational. Instead, the adversary might just be following different
standards of behavior or even specific rules of engagement. Defenders should
remember that their list of resources on the network that should be considered
high-value probably doesn’t match the attacker’s list, and attackers
additionally have their own agenda that almost certainly isn’t what the
defender thinks it is.
Target fixation
Target fixation results when defenders become so focused on a particular
issue that they fail to observe the “big picture.” Second World War military
aviators on ground attack runs were sometimes observed to concentrate so hard
on targets that they missed the ground itself rushing to meet them, and crashed
into it.  Similarly, defenders thinking deeply about a single attacker or
defended asset in creating a deception may misinterpret the attacker’s
intentions or neglect risks to other organizational assets. In this
way, even successful deceptions may prove irrelevant when they fail to divert
an adversary from his true objective. Worse, by becoming enamored with the idea
of tricking an attacker, defenders may devote an unwarranted amount of
available time and resources to planning and deploying a deception that can’t
deliver a reasonable return on investment. Target fixation can also result in
the failure to integrate a deception appropriately, as discussed above.
Cyber deception has become so fashionable
within the past few years that there now exist several complete Linux distros packed
with dozens of standalone tools assembled with the express purpose of enabling
easy deployment of deception mechanisms. These are all freely available for
defenders, and they range in capability from simulating a single SSH server all
the way to simulating entire networks of specific industrial control systems.
Unfortunately, this wealth of available tools is not accompanied by a similarly
sophisticated body of knowledge describing how to assemble an effective
deception. Many defenders don’t recognize this gap and seem to equate the
ability to download and run a tool with the ability to deploy an effective
deception. This is not the case.
While crafting an effective cyber deception
requires careful planning and integration within a larger concept of cyber
defense, it also requires a methodology to guide planning and deployment. The
U.S. military employs such a methodology to assist with development of
deceptions that support combat operations. Although developed with kinetic
warfare in mind, imaginative defenders can apply it to the cyber domain without
much difficulty.
Crafting an effective deception begins with
a simple premise: Successful deceptions are those which not only cause an
adversary to believe that the
deception is true; he must also act
or fail to act in a way that results
in a benefit to defenders. An effective deception follows a straightforward
methodology: See, Think, Do. 
See
Defenders must show or allow an adversary to
perceive some information, condition, or event to create a deception. The
narrative that a defender expects the adversary to weave together upon being
exposed to the deception is known as the deception story. The individual for
whom the deception is being created is known as the target. Note that targets
are always individuals. Organizations and computers don’t have brains and
therefore cannot be deceived.
The pieces of information that are shown to an adversary as part of the deception are known as indicators.
An indicator can be as simple as the banner presented to an attacker when he
connects to a service or as complex as falsified technical drawings and
business plans. Indicators are created using two basic activities- ruses and
displays.
A ruse is a simple trick designed to
deceive an adversary.  Ruses are useful for creating single indicators in
support of a larger deception plan. Altering the service banner of a web server
to cause it to identify itself as a different version is one example of a simple
ruse.
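To make the ruse concrete, here is a minimal sketch in Python using only the standard library. It presents a falsified Server header regardless of what actually handles the request; the spoofed version string, port, and page body are invented for the example, and in practice defenders would more likely adjust the configuration of their real web server.

```python
# A minimal sketch of a banner ruse: a toy HTTP responder that identifies
# itself as a notional Apache version. All values here are illustrative.
from http.server import BaseHTTPRequestHandler, HTTPServer

FAKE_BANNER = "Apache/2.2.3 (CentOS)"  # notional version for the deception

class BannerRuseHandler(BaseHTTPRequestHandler):
    def version_string(self):
        # This value is placed in the Server response header.
        return FAKE_BANNER

    def do_GET(self):
        body = b"<html><body>It works!</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), BannerRuseHandler).serve_forever()
```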
A display showcases a collection of
indicators to realize a deception story.  Displays can include ruses,
simulations, disguises, honeypots, honeynets, or any other mechanism useful for
conveying information that supports the deception.
This portion of the deception should be
informed by intelligence about the capabilities of the expected adversary when
possible. This allows defenders to ensure that they don’t expend resources on
indicators that the adversary isn’t capable of perceiving. During the Second
World War, the Allies mounted a series of deception operations (collectively
known as Operation Cockade) throughout 1943 intended to convince the Germans
that a cross-channel invasion was imminent in order to prevent them from
exerting additional pressure on the Soviet Union by transporting forces to the
east.  One component of this deception was a series of commando raids into
northern France (Operation Forfar) to capture German soldiers for
interrogation. The intent was for the Germans to interpret the repeated raids as
a reconnaissance effort in advance of an actual invasion near Boulogne. Forfar deceived
no one. Many of the raiders failed to come ashore due to strong defenses, and
those that did were able to do little more than cut a few sections of barbed
wire. In the end, the Germans didn’t even realize that the raids had occurred-
a complete failure to perceive the deception story.
Defenders should also consider their own
capabilities for this portion of the deception plan. Many cyber deception
discussions focus on honeypots and honeynets, but any source of information
used by an adversary can be useful in a deception. For example, attackers are
known to gather information about the types of systems used by an intended
target by perusing job listings and by reading the job responsibilities of
current employees on sites like LinkedIn. Defenders might consider creating
fake job postings for staff to work on systems that they don’t actually have.
They might also create fake employee social media profiles filled with
irrelevant skills and experience. Fake personas have actually become so common
in online deception that they have a clever name- sock puppets. 
Maxims of Military Deception – Table 1
|Magruder's principle||It is generally easier to induce an enemy to maintain a pre-existing belief than to present notional evidence to change that belief.|
|Limitations to human information processing||Human information processing is limited in two general ways that are useful in deception:|
1. The law of small numbers: decisions based on a very small set of data are inherently low quality. For example, a person who has never met someone named Muhammed may believe that this name is uncommon. However, Muhammed is the most common male name in the world.
2. Susceptibility to conditioning: deception targets are generally unable to detect small changes in indicators over time, even when the cumulative change is large. For example, a gradual increase in the volume of network traffic might successfully hide data exfiltration from defenders. Repeated false alarms desensitize deception targets before an event.
|Jones' dilemma||Deception becomes more difficult as the number of channels of information available to the target increases. Each additional channel increases the target's ability to discount the deception story.|
|A choice among types of deception||Where possible, the objective of the deception planner should be to reduce the uncertainty in the mind of the target- to force him to seize upon a notional worldview as being correct, not making him less certain of the truth, but more certain of a particular falsehood.|
|Husbanding of deception assets||There are circumstances where deception assets should be kept in reserve, awaiting a more fruitful use- despite the costs of maintenance and the risk of waste.|
|A sequencing rule||Deception activities should be sequenced so as to maximize the portrayal of the deception story for as long as possible. In other words, red-handed activities- indicators of true friendly intent- should be deferred to the last possible instant.|
|The importance of feedback||A scheme to ensure accurate feedback increases the chance of success in deception.|
|The monkey's paw||Deception efforts may produce subtle and unwanted side effects. Planners should be sensitive to such possibilities and, where prudent, take steps to minimize these counterproductive aspects.|
|Care in the design of planned placement of deceptive material||Great care must be exercised in the design of schemes to leak notional plans to the enemy. Apparent windfalls are subject to close scrutiny and often disbelieved. On the other hand, genuine leaks often occur under circumstances thought improbable.|
Think
Defenders must determine what conclusions
they intend for the adversary to draw after being exposed to the deception.
Then, they must craft a story that describes the conditions or events that the
adversary needs to be shown in order to reach the expected conclusions. For
example, if defenders want an attacker to believe that they are using Apache
web servers, then they must alter web server banners so they appear similar to
those presented by Apache servers.
The indicators that the adversary sees as
part of the deception shouldn’t just support the story. Ideally, they will
force the adversary to conclude that the
only explanation for what they are seeing is the story that the defender
wants them to believe. Thus, the best indicators are those with an unambiguous
meaning. However, these are more easily described than created.
The “think” component of the deception is heavily
reliant on the defender’s understanding of potential attackers, and it can be
quickly derailed by the cognitive challenges described earlier. Cultural and
language differences are another potential source of problems. The difficulty
for defenders with conveying the right message in support of a deception is further
exacerbated by the limitations imposed by the cyber domain. The attack surface
presented by the organization and the data contained by compromised systems are
the primary means of communication available for many deceptions, so defenders
must plan carefully in order to convey their deception story clearly and
convincingly.
Defenders should also consider that an attacker might need to be shown a
series of mutually supporting indicators using various means before they draw
the necessary conclusions. Each deception mechanism is referred to as a
channel, and the most convincing deceptions are multi-channel. In the months
prior to 1991’s Operation Desert Storm, American forces constructed an
elaborate deception story to convince Iraqi forces that their main effort would
include direct assaults into Kuwait through the Wadi al Batin and via an
amphibious landing onto the beaches adjacent to Kuwait City.  This
deception was supported by the large-scale movement of troops to the border of
western Kuwait, elaborate training exercises simulating cross-border raids, and
full-scale mockups constructed to resemble Iraqi positions in the alleged
attack area. American troops did this in full view of Iraqi surveillance and
even allowed preparations to be covered extensively by international news media.
(The deception could literally be seen on multiple channels.) Just prior to the
real attack, American forces conducted diversionary attacks into the Wadi al
Batin to convince the Iraqis to move even more forces away from the real attack
area. The aggregate effect of this complex multi-channel deception was for
large portions of the Iraqi military to remain in Kuwait and out of the path of
the real coalition main effort, resulting in one of the shortest ground wars in
history.
Republican Guard vehicles line Highway 8 in Kuwait following a successful deception.
Do
An effective deception must cause an
adversary to do or not do something useful for the defense. Defenders must
identify exactly what they want this to be before they begin planning the
deception. For example, defenders may decide that the best way to detect hidden
adversaries is to channel them to a network segment or host instrumented with
specialized monitoring technologies. We’ll call this the “kill zone.” In order
for attackers to go there, they need to believe that the kill zone contains
something that they want- sensitive data, user credentials, a data storage
location, access to some target system, etc.
The sophistication of the deception story
required to entice this behavior will depend on the sophistication of the
attacker. Advanced attackers who intend to retain access to a network for the
long term will be wary of detection and will likely see through clumsy deception
attempts. Defenders won’t be able to simply plant a sign in the middle of the
network that reads, “Free data this way!” Instead, defenders will need to leave
clues- unencrypted emails that discuss the kill zone, easily stolen credentials
that match hosts connected to the kill zone, apparent backup data transfers to
servers in the kill zone, etc. Some of these clues must be made difficult for
the attacker to acquire, so he doesn’t feel as though he’s being fed
information. If the attacker believes that he has pieced together a puzzle that
defenders didn’t want him to see, he will be more likely to take the action
desired of him by the deception plan. Depending on the attacker’s personality,
he may also act in the desired way after succumbing to pride, greed,
impatience, or even contempt for the “idiots” defending this network who failed
to hide their crown jewels from his clever sleuthing.
The book The Cuckoo’s Egg: Tracking a Spy Through the Maze of Computer Espionage
features a great example of this type of deception. The book’s author, Clifford
Stoll, discusses his efforts to identify a West German hacker lurking in the
network of Lawrence Berkeley National Laboratory (LBL).  Stoll had
difficulty tracing the physical location of the attacker due to the brevity of
his connections to the network and the unpredictability of his visits. After
surmising that the hacker was interested in national defense-related data,
Stoll crafted and planted an elaborate and extensive collection of sizable
documents purported to relate to work involving the United States’ Strategic
Defense Initiative (SDI)- the proposed 1980s space- and ground-based missile
defense system. Besides enticing the attacker to return again and again to LBL’s
network, Stoll also wanted him to remain connected for lengthier sessions
in order to allow the time needed by authorities to complete a trace of the
hacker’s location. The deception was effective. West German police successfully
apprehended Markus Hess, an agent of the Soviet KGB, shortly after he
downloaded the SDI documents.
One other aspect of the “Do” portion of the
deception plan merits discussion. The adversary must actually be capable of
performing the action desired of him. Consider a deception story that is
expected to entice an attacker into trying to hack into a particular Linux-based
system in order to trigger an alert. If this honeypot is protected too strongly
(firewall properly configured, strong credentials for remote access, etc.), the
attacker may not be able to kick down the door to get to the honey, and the deception will probably fail.
Thorough planning is required to realize the
components of an effective deception. “See, Think, Do” may describe the
deception from the perspective of the adversary, but deception planning occurs
in the reverse order. Defenders start by identifying the action they
want the adversary to take- the Do. Next, they determine what an adversary must
think in order to decide to take the desired action. Finally, they select
indicators for the adversary to see that will cause him to draw
the desired conclusions.
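As a purely illustrative aid, the sketch below records a plan in this reverse order using the kill-zone scenario described earlier. All of the content is hypothetical; the point is that the Do is fixed first and the indicators to See are selected last.

```python
# An illustrative planning aid (hypothetical content throughout): capture the
# plan in the order it is developed- Do first, then Think, then See.
from dataclasses import dataclass

@dataclass
class DeceptionPlan:
    do: list     # actions we want the adversary to take
    think: list  # conclusions the adversary must draw in order to act
    see: list    # indicators expected to produce those conclusions

# The "kill zone" scenario from earlier, expressed in planning order.
kill_zone_plan = DeceptionPlan(
    do=[
        "Connect to hosts in the instrumented 'kill zone' segment",
        "Attempt to retrieve the apparent backup data stored there",
    ],
    think=[
        "The kill zone holds backup copies of sensitive data",
        "The credentials I harvested grant access to it",
    ],
    see=[
        "Unencrypted emails discussing backups to the kill zone",
        "Easily stolen credentials matching kill-zone hosts",
        "Apparent backup transfers to servers in the kill zone",
    ],
)

# Deployment reverses the planning order: the See indicators go out first.
print(kill_zone_plan.see)
```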
The Cuckoo’s Egg provides some insight into the scale
of effort required for deception planning. Clifford Stoll’s SDI deception took
months to assemble and execute, and he benefited from support from various
government agencies before and during the deception. In addition to the SDI
documents themselves, Stoll’s team created fictitious personas with corresponding
accounts to own the documents on the network, as well as fictitious email
traffic discussing them.
He was also able to develop significant
insight about Hess by observing his activities in real time on a number of
occasions. This allowed Stoll to predict, with a high degree of certainty, what
he needed to show his adversary in order to cause him to act in the desired
way. In this, he benefited from one of the maxims of deception identified by
the U.S. Army. Magruder’s Principle states that, “It is generally easier to
induce an enemy to maintain a pre-existing belief than to present notional
evidence to change that belief.”  Based on the network activity that he
observed, Stoll was able to surmise that Hess believed that LBL was involved in
classified weapons research similar to that conducted by Lawrence Livermore
National Laboratory, a similarly named but separate facility. The See, Think, Do description of Stoll's deception is outlined in Table 2.
Elements of the SDINET Deception from The Cuckoo’s Egg – Table 2
|See||-LBL is a United States National Laboratory.|
-Lawrence Livermore National Laboratory (an unrelated facility with a similar name to LBL) conducts national security research (including nuclear weapons research) for the U.S. government.
-LBL staff are communicating via email about the Strategic Defense Initiative program.
-The LBL network has an account called “SDINET”.
-The SDINET account owns a number of sizable documents with restricted access.
-The SDINET documents appear to contain sensitive data about the SDI program.
|Think||-LBL is engaged in classified government research.|
-LBL's research involves the SDI.
-LBL has stored sensitive data regarding its SDI work on its network.
-The SDINET documents contain valuable intelligence data and are therefore worth the risk of remaining connected to the LBL network for an extended period of time.
|Do||-The deception target will initiate future connections to the LBL network.|
-The deception target will locate documents that appear to be associated with the U.S. government’s SDI program.
-The deception target will attempt to download copies of the SDI documents from the LBL network.
-The deception target will remain connected to the LBL network until document downloads are complete.
Measure performance and effectiveness
Cyber deception plans should also include measures of performance and measures of effectiveness. Measures of performance
tell defenders when their target has observed the deception, while measures of
effectiveness indicate whether the target has accepted the deception story and
is acting in the desired way. Devising measures of performance in the cyber
domain is straightforward, because computers can easily record every instance
when a particular piece of data was read and by whom (username, source IP
address). Defenders can know the very instant that a deceptive file posted on a
web server has been downloaded or a connection is made to a deceptive service.
They can also take steps to instrument some file types by embedding active
content, so they’ll know immediately when files are opened and read.
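A measure of performance can be as simple as an access log on the decoy itself. The following sketch (with a hypothetical decoy path, port, and log file) serves a single notional document and records the requester's address and the time of each download.

```python
# A minimal measure-of-performance sensor: serve one decoy document and log
# who retrieved it and when. Path, port, and log file are assumptions.
import logging
from http.server import BaseHTTPRequestHandler, HTTPServer

logging.basicConfig(filename="decoy_access.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

DECOY_PATH = "/backups/plans.docx"  # hypothetical decoy document

class DecoyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == DECOY_PATH:
            # The instant of download is the measure of performance.
            logging.info("decoy retrieved by %s", self.client_address[0])
            body = b"NOTIONAL DATA"  # harmless placeholder content
            self.send_response(200)
            self.send_header("Content-Type", "application/octet-stream")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), DecoyHandler).serve_forever()
```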
Assessing the effectiveness of a cyber
deception requires a bit more cunning. Defenders will need to use some imagination
to devise measures of effectiveness, since they usually won’t be able to
directly observe the activities of cyber adversaries. Consider the “Do” that is
desired of the attacker. Whenever possible, deceptions should be structured to
compel an attacker to do something, so that defenders avoid the awkward
position of needing to observe activity that is not occurring. But if the deception
causes an attacker to act, defenders can use security monitoring tools deployed
on key network segments and hosts to detect it.
Defenders can also look for other, more
subtle measures of effectiveness. A cyber canary, like the canaries used by
miners to warn them of potentially deadly fumes, can serve as a useful measure
of effectiveness when integrated with a deception plan. Cyber canaries are
similar to honeypots in their ability to signal defenders when an attacker has
performed some unambiguously malicious activity, but they’re usually not
interactive like a honeypot. For example, network administrators could add
canary accounts to their domain to signal malicious activity. Ideally, these
accounts would appear privileged (reintroduce the “admin” username) but would
not subject the organization to risk if compromised. Since the canary accounts
don’t belong to an actual user, any login attempts for one of these accounts should
instantly generate an alert. Canary accounts could easily be integrated with a
deception by associating them with fake personas, stashing them (in easily
decrypted form) on honeypot systems, or even selectively leaking them to
would-be social engineers.
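As a sketch of how little machinery a canary account tripwire requires, the following example follows a Unix-style auth log and raises an alert whenever a canary account name appears in an authentication event. The log path, account names, and alert mechanism are assumptions for illustration; a real deployment would hook into the organization's directory service and SIEM.

```python
# A minimal canary-account tripwire: follow an auth log and alert on any
# authentication event that mentions an account no real user holds.
# The log path, account names, and alert mechanism are assumptions.
import time

CANARY_ACCOUNTS = {"admin", "backup_svc"}  # hypothetical canary names
AUTH_LOG = "/var/log/auth.log"             # typical Debian/Ubuntu location

def alert(line: str) -> None:
    # Stand-in for a real alert (SIEM event, email, pager, etc.).
    print(f"[CANARY ALERT] {line.strip()}")

def watch() -> None:
    with open(AUTH_LOG, "r") as log:
        log.seek(0, 2)  # start at the end of the file, like `tail -f`
        while True:
            line = log.readline()
            if not line:
                time.sleep(1)
                continue
            # No legitimate user ever logs in with a canary name, so any
            # mention of one in an auth event warrants an immediate alert.
            if any(name in line for name in CANARY_ACCOUNTS):
                alert(line)

if __name__ == "__main__":
    watch()
```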
Integrate deception with other defenses
Deception in the cyber domain, while
powerful, is unlikely to affect adversaries as strongly as deception in the
physical domain and therefore requires integration with other defensive
measures. In the physical domain, for instance, deception can create a strong deterrent
effect on its own. During the Cold War, the Soviet Union engaged in a massive
strategic deception against the United States. This effort, known as the Shelepin
Plan, spread disinformation about Soviet military capabilities on a massive
scale in an effort to prevent military confrontation with the West. Notably, Shelepin
contributed to the late 1950s “missile gap” wherein the United States
overestimated the Soviets’ number of available ballistic missiles by orders of magnitude.  In truth, the Soviets had a mere four missiles
capable of reaching the United States. 
In the cyber domain, deterrence is more
difficult, due to the difficulty in attributing attacks and legal issues related
to retaliation. Thus, cyber defenders must nest deception plans within a larger
overall concept of cyber defense rather than attempt to deploy deception on its
own. Besides supporting the security of the organization, integration has two
additional benefits. First, integration ensures that the deception doesn’t try
to tell a story inconsistent with the other characteristics of the organization
that the adversary can observe. Soviet forces during the Second World War
routinely attempted to employ decoy radio traffic to deceive the German forces
opposing them.  These deceptions continually failed due to the Germans’
ability to discount what they heard by cross-referencing the traffic with other
intelligence sources such as aerial reconnaissance. The Soviets failed to
integrate what they did with what
they said, dooming their deception
efforts.
Integration also ensures that defenders
have considered and emplaced additional countermeasures to detect and eradicate
intruders in the event that their deception activities don’t succeed. This is
an area where deception planning can actually support the development of the
larger defensive plan. By considering the various options available to an
attacker after being exposed to a deception story, defenders can identify adversary
courses of action that are likely to bypass existing security monitoring
mechanisms and shift defensive resources appropriately.
Defenders must consider that adversaries
may attempt to execute deceptions of their own against the
organization. In fact, the effectiveness of social engineering tactics in
circumventing cyber defenses strongly supports the assertion that deception
will likely constitute an attacker’s main effort during the early stages of
practically every computer intrusion. Notorious hacker Kevin Mitnick describes
a number of such attacks in his book Ghost
in the Wires. In one anecdote, Mitnick claims to have compromised the
United States Social Security Administration (SSA) “through an elaborate social
engineering attack.”  He goes on to detail how he aggregated a number of
pieces of publicly available information to learn enough about the SSA’s
organization, hierarchy, procedures, and jargon to convince an unwitting
employee that he was with the SSA’s inspector general’s office and needed her
help with a number of “investigations.” With this ruse, Mitnick was able to
gain regular access to sensitive personal information on virtually any American
including addresses, social security numbers, and income histories. He went on
to leverage this illicit access to successfully identify an undercover federal
agent who was investigating him.
Defenders might maintain a high level of vigilance when it comes to
unsolicited communications from outsiders, but most other members of the
organization do not. The best defense against deception targeted at defenders
is to limit the amount of (truthful) sensitive information about the
organization that is available to outsiders. This disrupts the ability of
attackers to create effective indicators that support their own coherent and
believable deception story.
Deception provides cyber defenders with a
key capability to control adversary behavior in order to enhance enterprise
security. To achieve this, deceptions must be carefully planned and integrated
with the organization’s overall concept of cyber defense. By planning each
deception from an adversary perspective, employing the See, Think, Do
methodology, and instrumenting the deception with measures of performance and
effectiveness, defenders can ensure that their cyber deceptions succeed.
In the near future, deploying deception
technology will become easier than ever as emerging commercial products promise to simplify the creation and management
of elaborate decoys spread across the network. By employing deception
technologies offered by an increasing array of cybersecurity firms (see Table 3),
defenders can centrally control complex networks of indicators, deceptive
services, honeypots, and honeynets while simultaneously integrating with
security monitoring and incident response technologies. Technology alone won't
create solutions, however. Cyber deception also requires cunning, careful
analysis, and deliberate planning in order to realize its full potential. The
principles and techniques discussed herein are a good place to start.
Free / Open Source Cyber Deception Tools – Table 3
|Active Defense Harbinger Distribution (ADHD)||-Based on Ubuntu LTS|
-Contains numerous tools for Active Defense
-Functionality to produce “bugged” files that beacon when opened
-Contains numerous penetration testing tools
-Contains numerous social engineering tools
|HoneyDrive||-Virtual appliance based on Xubuntu LTS Desktop|
-Kippo SSH honeypot, plus Kippo-Graph, Kippo-Malware, Kippo2MySQL and other helpful scripts
-Dionaea malware honeypot, plus DionaeaFR and other helpful scripts
-Amun malware honeypot, plus helpful scripts
-Glastopf web honeypot, along with Wordpot WordPress honeypot
-Conpot SCADA/ICS honeypot
-Honeyd low-interaction honeypot, plus Honeyd2MySQL, Honeyd-Viz and other helpful scripts
-LaBrea sticky honeypot, Tiny Honeypot, IIS Emulator and INetSim
-Thug and PhoneyC honeyclients for client-side attack analysis, along with the Maltrieve malware collector
|Artillery||-Provides honeypot functionality by spawning multiple open ports on a system|
-Monitors for file system changes
-Monitors for brute force attacks
-Can be configured to automatically block IP addresses for systems that connect
-Integrates with threat intelligence feeds to track known malicious IP addresses automatically
|Honeybadger||-Geolocates attackers based on IP address|
|Kippo||-SSH honeypot that provides shell and file system emulation- attackers can traverse a fake directory structure|
-Popular- attackers might be able to identify a Kippo instance as a honeypot
|Conpot||-Industrial Control System (ICS) honeypot|
-Simulates a range of ICS protocols
|Dionaea||-Honeypot designed to trap and store malware samples|
-Could potentially help identify zero-day vulnerabilities
Adam Tyra is a cybersecurity professional with expertise in security operations, security software development, and mobile device security. He is currently employed as a cybersecurity consultant. Adam served in the U.S. Army and continues to serve part-time as an Army reservist. He is an active member of the Military Cyber Professionals Association and is a former president of the San Antonio, Texas chapter.
 "Battle of Hastings." Wikipedia. Accessed June 2016.
 Sheffy, Yigal. "Overcoming Strategic Weakness: The Egyptian
Deception and the Yom Kippur War." Intelligence and National Security,
January 24, 2007.
 Holt, Thaddeus. The Deceivers: Allied Military Deception in the
Second World War. New York: Scribner, 2004.
 Joint Publication 3-15: Barriers, Obstacles, and Mine Warfare for
Joint Operations. Washington, D.C.: Joint Chiefs of Staff, 2011.
 Witlin, Lauren. "Mirror Imaging and Its Dangers." SAIS
Review of International Affairs, 2008.
 Colgan, Bill. Allied Strafing in World War II: A Cockpit View of Air
to Ground Battle. Jefferson, NC: McFarland &, 2010.
 Joint Publication 3-13.4: Military Deception. Washington, D.C.:
Joint Chiefs of Staff, 2012.
 ADRP 1-02: Operational Terms and Graphics. Washington, D.C.:
Department of the Army, 2015.
 FM 90-2: Battlefield Deception. Washington, D.C.:
Department of the Army, 1988
 Sockpuppet (Internet). Wikipedia. Accessed June 2016.
 FM 90-2: Battlefield Deception. Washington, D.C.: Department
of the Army, 1988
 Breitenbach, Daniel. Operation Desert Deception: Operational
Deception in the Ground Campaign. Newport, RI: Naval War College, 1991.
 Stoll, Clifford. The Cuckoo's Egg: Tracking a Spy through the Maze
of Computer Espionage. New York: Doubleday, 1989.
 FM 90-2: Battlefield Deception. Washington, D.C.:
Department of the Army, 1988
 Townsend, Robert E. Deception and Irony: Soviet Arms and Arms
Control. 2-3 ed. Vol. 14. American Intelligence Journal, 1993.
 Day, Dwayne. "Of Myths and Missiles: The Truth about John F.
Kennedy and the Missile Gap." The Space Review.
 FM 90-2: Battlefield Deception. Washington, D.C.:
Department of the Army, 1988
 Mitnick, Kevin D., and William L. Simon. Ghost in the Wires: My
Adventures as the World's Most Wanted Hacker. New York: Little, Brown and