The Magazine of the MCPA



Leader's Guide to Protecting Cyber and Operational Security

posted Feb 26, 2018, 7:32 PM by James Caroland   [ updated Mar 2, 2018, 10:08 AM ]

By Major Michael Senft, U.S. Army

Cyberspace is a contested domain of warfighting and information technology. Capable and intelligent adversaries, both state and non-state actors, seek to asymmetrically disrupt U.S. advantages in communications by targeting the weakest link in our technical and human defenses1,2. A single weak security practice can result in the widespread compromise of a network or information system, endangering not only the lives of U.S. military and civilian personnel, but also business viability3,4. The purpose of this guide is to provide leaders with a concise outline of significant Cybersecurity and Operational Security (OPSEC) concerns, along with recommendations to protect network-dependent warfighting and other essential functions, including mission command, fires, intelligence, and sustainment. Cybersecurity and OPSEC are processes that should be incorporated into all phases of operations to protect people and equipment and to ensure mission success3. This guide will cover three topics: Cybersecurity concerns, OPSEC concerns, and recommendations.  Let’s address the foundational cybersecurity concerns first.

Cybersecurity Concerns:

Rob Joyce, the former Chief of the National Security Agency’s Tailored Access Operations and current White House Cybersecurity Coordinator, succinctly captured this concern by stating, "If you really want to protect your network, you really have to know your network"5,6. Knowing the network is essential to defending your key cyber terrain. Leaders must consider that:

  • Every device that emits a signal or has a processor is a potential vulnerability4
  • There are three primary attack vectors within your formations3,4,5,7:
      • Email – Spear phishing emails can fool even experienced security professionals6,8,9
      • Removable media – Adversaries use removable media to gain access to systems9
      • Websites – Adversaries compromise trusted websites to precisely target specific user groups6,10

The second concern is the threat posed by privilege escalation and lateral movement. Leaders must identify, monitor, and protect high-value assets within their organizations by considering the following11:
  • Mission-critical data, systems, and networks12
  • Network and system configuration, security, and monitoring systems13
  • Users with elevated privileges (e.g., network and system administrators, users with removable media writing or cross-domain data transfer rights, etc.)14

OPSEC Concerns:

The first thing leaders should understand is that large enterprise networks, including the Non-classified Internet Protocol Router Network (NIPRNET), are not secure. Sensitive but Unclassified (SBU) information should be encrypted prior to transmission via email, as communications can be targeted for interception and exploitation at any time16. Likewise, SBU data stored on mobile computing devices (data-at-rest) should be encrypted to prevent compromise in the event of loss or theft of these devices9. SBU data includes, but is not limited to:
  • Network Configuration Files, Network Architecture Diagrams, and Network Vulnerability Reports15
  • Password and System Credential Files15
  • Personally Identifiable Information (PII)15
  • Very Important Person (VIP) Travel15
  • Locations, movements and mission planning of essential elements15
Secondly, leaders must understand and mitigate operational vulnerabilities created by cell phones4,16,17:
  • Cell phones are prime targets for enemy Signals Intelligence (SIGINT) and Electronic Intelligence (ELINT) even when used in a disciplined manner4
  • Compromised smart phone applications can provide adversaries with geo-location and other valuable intelligence16
  • Loss or theft of cellphones and other mobile devices can provide an avenue of attack for adversaries to gain access to enterprise networks or provide access to sensitive information
Third, leaders must understand and mitigate operational vulnerabilities created by insider threats18:
  • Malicious insiders abuse their authorized access to information and information systems to commit theft, espionage, fraud, and sabotage18
  • Unintentional insider threats may unknowingly aid adversaries to gain access to systems or exfiltrate data18
Finally, leaders should understand and seek to mitigate the vulnerabilities introduced by the use of social media, social engineering, and PII.
  • Adversaries use social media to gather intelligence and target Service Members, their families and others4,16,17,19 
  • Social engineering is a highly effective and low-cost attack vector used by threat actors to bypass the most effective defenses to compromise systems and gain access to sensitive information20
  • Awareness and training are the most effective countermeasures20
  • Adversaries target PII to exploit financial and other personal interests of Service Members, their families and others4


To counter the dual concerns of cybersecurity and OPSEC, leaders should foremost train their people, but also implement and enforce best practices. For cybersecurity concerns, the following recommendations will strengthen your ability to know your network and defend against the insider threat:

Protect Credentials 
  • Implement the Principle of Least Privilege to limit account rights to the minimum required by the user5,6,7
  • Log and monitor privileged user activity and the use of administrative tools6,7,12
  • Enforce password management, since default, weak, or stolen passwords enable adversaries to gain access to systems and elevate privileges12,21

Defend Against the Insider Threat 

  • Know your Service Members and employees
  • Know the behavioral indicators of malicious threat activity18
  • Employ security technology, including multifactor authentication, to detect and prevent insider attacks12,18

Even the most secure network can be compromised; thus, it is essential to harden the network and introduce resiliency4.
  • Disable unnecessary services.  Unnecessary services provide potential avenues of attack for adversaries6,7,21
  • Disable use of insecure protocols (FTP, SNMPv1, Telnet, etc.).  Insecure protocols transmit user names and passwords in the clear.
  • Identify systems that are not patched on a continuous basis and apply other risk mitigations such as traffic filtering and network segmentation to reduce the attack surface.  Program of Record systems are an example as they typically receive software updates and patches on a quarterly basis21
  • Prevent unauthorized devices from connecting to the network.  Unauthorized devices provide an avenue of attack for adversaries to gain access to systems or exfiltrate data6,14
  • Restrict physical access to network devices and infrastructure to the greatest extent possible.  Physical access enables a skilled adversary to quickly bypass technical security measures to gain full control of systems23
  • Develop Continuity of Operations Plans to operate despite degraded or disrupted communications.1,11  Ensure communications Primary, Alternate, Contingency, and Emergency (PACE) plans enable mission command even in the event unclassified and/or one or more classified networks are compromised or disrupted.4,11
As outlined in this guide, a single weak security practice can result in the widespread compromise of a network or information system. Protecting network-dependent warfighting and other essential functions requires incorporating cybersecurity and OPSEC into all phases of operations. Like good OPSEC, effective cybersecurity requires the development and promotion of an organizational culture that is aware of cyber risks and adversary threats, and that emphasizes and enforces standards and practices that minimize vulnerabilities to Department of Defense and corporate networks, systems, and information.3

About the Author

Major Michael Senft is a Functional Area 26A Information Network Engineering Officer and has multiple deployments in support of Joint and Special Operations units. He holds a Master's Degree in Computer Science from the Naval Postgraduate School and a Master's Degree in Engineering Management from Washington State University.

End Notes

1. U.S. Department of the Army. (2014). The Army Operating Concept, Win in a Complex World. TRADOC Pamphlet 525-3-1. Retrieved from

2. U.S. Army Asymmetric Warfare Group. (2016). Russian New Generation Warfare Handbook. Retrieved from (CAC Login Required).

3. U.S. Army Chief Information Office/G-6. (2015). Leaders Information Assurance/Cybersecurity Handbook. Retrieved from

4. R. Leonhard, (2016). The Defense of Battle Position Duffer – Cyber Enabled Maneuver in Multi-Domain Battle. Retrieved from (CAC Login Required).

5. R. Joyce, (2016). Disrupting Nation State Hackers. USENIX 2016 Presentation. Retrieved from

6. Center for Internet Security. (2016). Critical Security Controls for Effective Cyber Defense. Retrieved from

7. National Security Agency. (2015). NSA Methodology for Adversary Obstruction. Retrieved from

8. FireEye. (2016). Spear-Phishing Attacks - Why They are Successful and How to Stop Them. Retrieved from

9. Defense Security Service (n.d.) Common Cyber Threats: Indicators and Countermeasures. Retrieved from

10. FireEye. (2015). Zero-Day Danger. Retrieved from

11. BG P. Frost and M. Hutchison, (2015). Top 10 Questions for Commanders to Ask About Cybersecurity. Retrieved from

12. Verizon. (2016). 2016 Data Breach Investigations Report. Retrieved from

13. P. Stone and A. Chapman, (2015). WSUSpect – Compromising the Windows Enterprise via Windows Update. Retrieved from

14. U.S. Department of the Navy (2014). Commander’s Cyber Security and Information Assurance Handbook. COMNAVCYBERFORINST 5239.2A. Retrieved from

15. National Security Agency. (2016). JCMA Findings and Trends – 2016 Information Assurance Symposium. Retrieved from

16. CrowdStrike. (2016). Use of Fancy Bear Android Malware in Tracking of Ukrainian Field Artillery Units. Retrieved from

17. U.S. Computer Emergency Readiness Team. (2011). Cyber Threats to Mobile Phones. Retrieved from

18. National Cybersecurity and Communications Integration Center. (2014). Combating the Insider Threat. Retrieved from

19. Wired. (2017). Meet Mia Ash, the Fake Woman Iranian Hackers Used to Lure Victims. Retrieved from

20. U.S. Department of State Overseas Security Advisory Council. (2015). Social Engineering: Threats and Best Practices. Retrieved from

21. U.S. Army Cyber Center of Excellence. (2016). Cyberspace Operations Bulletin 16-13. Retrieved from (CAC Login Required)

22. US Army Communications-Electronics Command (CECOM) Software Engineering Center. (2014). Software Engineering Center Productions and Services Catalog. Retrieved from

23. D. Ollam, (2008). Ten Things Everyone Should Know About Lockpicking & Physical Security. Retrieved from

Image credits (in order of appearance):  Pixabay, Moody Air Force Base, U.S. Department of Defense

Meltdown and Spectre: Analyzing the Lasting Costs

posted Jan 19, 2018, 1:33 PM by James Caroland   [ updated Jan 19, 2018, 1:39 PM ]

By James Loving


Earlier this month, researchers with Google’s vulnerability research group Project Zero disclosed a pair of vulnerabilities, Meltdown and Spectre, that threaten nearly every workstation and server on the Internet. These attacks exploit CPU performance features to violate process isolation, allowing an adversary to read private information, such as passwords. Patches are being rapidly developed and distributed, but the mitigations typically carry performance hits ranging up to 20%. Because of this long-term performance impact, these vulnerabilities may become the most financially costly in recent memory, even though rapid patching has prevented significant harm to systems’ security.

Overview of Vulnerabilities

Meltdown (CVE-2017-5754)

The Meltdown attack exploits out-of-order execution, a performance feature whereby a CPU begins executing instructions before it is certain they should run, so that by the time an instruction is reached on the committed path, its execution has already begun, increasing apparent performance. Specifically, the CPU will begin execution of multiple instructions, assuming that any out-of-order instructions can be undone. Unfortunately, these out-of-order instructions are able to bypass the privilege checks that prevent code from accessing kernel memory, because these checks have a significant impact on performance and are thus enforced only on the “correct” path of code execution.
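The access pattern just described can be sketched in C. This is an illustrative outline only, not a working exploit: the privileged byte is simulated with an ordinary user-space variable (a stand-in I have introduced), and the fault handling and cache-timing steps of the real attack appear only as comments.

```c
/* Meltdown access pattern (illustrative sketch, not a working exploit).
 * A real attack dereferences a kernel address, which faults; on affected
 * CPUs, instructions dependent on the faulting load may execute out of
 * order first, leaking the value through the cache. */
#include <stdint.h>

static uint8_t simulated_kernel_byte = 0x2A;  /* stand-in for a privileged byte */
static uint8_t probe[256 * 4096];             /* one page per possible byte value */

uint8_t transient_sequence(void) {
    /* Step 1: load from a privileged address.  In the real attack this is
     * something like: uint8_t secret = *(uint8_t *)kernel_addr;  (faults) */
    uint8_t secret = simulated_kernel_byte;

    /* Step 2: before the fault retires, a dependent load encodes the
     * secret into the cache by touching one distinct page of `probe`. */
    return probe[secret * 4096];

    /* Step 3 (omitted): after handling the fault, time accesses to each
     * probe[i * 4096]; the single fast access reveals secret = i. */
}
```

The key point the sketch illustrates is step 2: the secret selects *which* page of the probe array is cached, so the attacker never reads the secret directly but infers it from memory latency afterward.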

This vulnerability primarily affects Intel processors. While the Meltdown paper presents some evidence that the out-of-order exploitation of speculative execution is possible on AMD and ARM chipsets, the researchers were unable to produce a proof-of-concept password leaker for these architectures. ARM has confirmed that this vulnerability impacts only its Cortex-A75 core; its other processors are not vulnerable. The Cortex-A75 is used in some Snapdragon chipsets, which are common in mobile devices.

For more information on Meltdown, I recommend this developer’s analysis, the researchers’ original blog post, and their paper.


Spectre

Project Zero’s original blog post outlined two variants of the Spectre attack; the Spectre paper groups both variants together and provides five additional variants. This article discusses only the two primary variants included in the blog post; for details on the additional variants, consult pages 10-12 of the paper.

Variant 1 (CVE-2017-5753)

Spectre variant 1 also exploits speculative execution, but it relies on branch prediction, wherein the CPU begins executing a predicted branch of a conditional statement while the condition is still being evaluated. Therefore, when the condition is evaluated and the branch to be executed has been determined, execution is already in progress. The CPU assumes that any instructions from unchosen branches can be undone; the Spectre attack exploits this assumption. To execute the attack, the adversary chooses a “gadget,” an instruction sequence in the victim’s address space, which is then speculatively executed. When the CPU attempts to undo the effects of this unchosen branch, changes to the memory cache are not undone. Thus, the adversary can force leakage of sensitive data from other processes’ memory.

Below is a pseudocode example of the Spectre variant 1 vulnerability.  Due to compiler optimizations, this snippet does not represent a real-life instance of speculative execution.
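A minimal C sketch of the canonical variant 1 “bounds check bypass” gadget follows. The names array1, array2, and victim_function follow the convention used in the Spectre paper; the branch-predictor training and the cache-timing probe of a real attack are omitted.

```c
/* Spectre variant 1 gadget (illustrative sketch, not a working exploit). */
#include <stddef.h>
#include <stdint.h>

uint8_t array1[16];              /* data the victim may legally read        */
uint8_t array2[256 * 4096];      /* probe array: one page per byte value    */
size_t  array1_size = 16;

void victim_function(size_t x) {
    /* After the branch predictor is trained with in-bounds values of x,
     * the CPU may speculatively execute the body even when x is out of
     * bounds, loading array2[array1[x] * 4096] into the cache before the
     * bounds check retires.  Architectural state is rolled back, but the
     * cache footprint is not; timing accesses to array2 afterward reveals
     * the out-of-bounds byte array1[x]. */
    if (x < array1_size) {
        volatile uint8_t tmp = array2[array1[x] * 4096];
        (void)tmp;
    }
}
```

An attacker calls victim_function repeatedly with legal indices to train the predictor, then once with a malicious out-of-bounds x chosen so that array1[x] points at a secret elsewhere in the victim’s address space.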

This vulnerability has a much wider impact than Meltdown; it affects Intel, AMD, and ARM architectures. All three manufacturers have acknowledged the vulnerability and are working with OS developers to provide mitigation instructions and patches for their hardware. Qualcomm has not commented on this variant’s impact on its Snapdragon chipsets, but both Android and Apple iOS have acknowledged their vulnerability and issued patches.

Variant 2 (CVE-2017-5715)

Like variant 1, Spectre variant 2 exploits branch-prediction-based speculative execution, but in variant 2 an adversary process influences the branch prediction of a victim process and redirects it, thus forcing the victim process to speculatively execute a “gadget”: code located elsewhere in memory that can be used to leak sensitive information. This variant affects the same platforms as variant 1, although it may be more difficult to execute against certain chipsets.


Because of the unprecedented range of vulnerable devices, the Spectre and Meltdown attacks have a significant impact. Cloud vendors, and organizations that rely on cloud services, are likely to bear the majority of the cost. However, due to the pace at which cloud services are patching their servers, they are likely to face a primarily monetary cost as a result of these vulnerabilities, instead of a legitimate cybersecurity threat. In comparison, the Internet of Things (IoT) is likely to face legitimate cybersecurity challenges due to these vulnerabilities, as IoT devices are notoriously difficult to patch and/or infeasible to replace.


Compared to other groups, individual computer users are likely the least affected. The industry has responded quickly to the attacks. Microsoft, Apple, and Red Hat have published patches addressing the vulnerabilities, and Ubuntu and FreeBSD are currently developing patches as of the writing of this article. While these patches generally carry significant performance hits, most user tasks, such as Internet browsing, word processing, and video streaming, are not computationally intensive; the noticeable impact of the reduced performance should be small.

The Cloud

Because most cloud services run on virtual servers (versus bare metal), Meltdown and Spectre have massive potential impact. A customer can take all necessary precautions and still be victimized by another, insecure tenant, as the attacking server and the victim server may exist on the same physical hardware, allowing memory access between virtual servers. Fortunately, the industry responded quickly: Amazon Web Services (AWS), Google Cloud, Microsoft Azure, Digital Ocean, and other companies have already begun patching their systems.

While the majority of press surrounding Meltdown and Spectre’s impact on the cloud has centered on these cybersecurity issues, I believe the long-term performance reduction is the greater impact. Unlike individual users, who often have computing resources to spare, modern cloud systems are designed to run at extremely high loads. The customer is typically paying for the infrastructure and therefore wants to get their money’s worth. Because of the reduced performance resulting from the necessary patches, these customers may find their cloud-associated costs rising to compensate: each server being, e.g., 10% less effective means that roughly 11% more servers are necessary for equivalent computational capability.
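The arithmetic behind that estimate can be sketched with a small helper (the function name and figures are illustrative, not from the article): a fleet whose servers each lose the fraction `hit` of their throughput must grow by a factor of 1/(1 − hit) to hold total capacity constant.

```c
/* Fleet size needed to keep total capacity constant after each server
 * loses the fraction `hit` of its throughput: ceil(baseline / (1 - hit)). */
int servers_needed(int baseline, double hit) {
    double exact = baseline / (1.0 - hit);  /* 1/(1-hit) growth factor */
    int whole = (int)exact;
    return (exact > (double)whole) ? whole + 1 : whole;  /* round up */
}
```

With a 10% hit the growth factor is 1/0.9 ≈ 1.111, i.e., roughly 11% more servers (112 for a baseline of 100, rounding up); at the worst-case 20% hit cited above, 125 servers are needed.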

The Internet of Things

However, the largest impact of Meltdown and Spectre may be to the security of the Internet of Things. IoT devices are often deployed and not administered again, so many such devices will likely never receive the patches necessary to protect against Meltdown/Spectre. In the long term, this vulnerability may persist, as traditional devices - servers, workstations - are replaced with new hardware, which will hopefully be resistant to Meltdown/Spectre, and the IoT devices remain in place.

Fortunately, I have little reason to believe that these exploits will be readily weaponized for the Internet of Things. All current proofs of concept are used for information theft; they execute fragments of code already present on the victim device to leak memory, thus accessing private information. I have not seen an example of remote code execution, which would be necessary for the majority of malicious activity in the IoT. While an adversary may be able to, for example, use Spectre on an ARM-based smart light bulb to steal a WiFi password, they should be unable to use these exploits to create a botnet.


The Meltdown and Spectre attacks may, in the long term, be extremely costly: they could increase the costs of cloud computing by up to 20%. However, due to the rapidity with which patches are being developed and deployed, combined with the narrow use of the exploits - leaking private information while already on a device - they do not appear to pose a significant threat to the security of the Internet or its users.

About the Author

James Loving is a security engineer, research affiliate with the Massachusetts Institute of Technology (MIT) Internet Policy Research Initiative, and copy editor of Cyber magazine. His research interests include security and privacy in the Internet of Things and the intersection of Internet and international security. He holds BS degrees from Florida State University in Computer Criminology and International Affairs and MS degrees from MIT in Computer Science and the Technology and Policy Program. He currently serves as an officer in the Massachusetts Army National Guard.

Image credits:  all images from

Every Marine is a Cyber Marine Too: The Four Internet Safety Rules

posted Jan 17, 2018, 6:13 PM by James Caroland   [ updated Jan 17, 2018, 6:14 PM ]

By LtCol John Dobrydney, USMC, CISSP

An oft-repeated phrase, “The network would be secure if it weren’t for the users,” is a wishful thought of many cyber security professionals.  Repeated mistakes, such as plugging in unauthorized USB drives, opening unsolicited emails and thoughtlessly clicking on embedded malicious links, and uploading publicly viewable personally identifiable information, provide never-ending job security for security professionals, along with never-ending risk to the information and information systems those professionals are charged with protecting.  While these concepts seem like common sense to those whose life’s work revolves around all things cyber security, most users have little to no understanding of computer networks, application development, social engineering, or malware.  These are not their areas of specialization, so it is unrealistic to assume common users have the requisite knowledge to recognize the wide variety of threats facing them.  Therefore, users need a few easy-to-remember, yet inclusive, “safety rules” to promote Internet safety and protect users’ information and information systems.

The United States Marine Corps recognized a similar situation.  By training and necessity, every Marine is considered to be a “Rifleman” capable of employing a rifle if called upon, but there are varying levels of weapons handling proficiency across the Marine Corps.  Combat arms Marines are expected to have the highest levels, while Marines in other occupational specialties who do not regularly exercise their weapons handling skills will have less.  Regardless, a Marine is expected to employ any weapon in a safe and professional manner, to shoot only valid targets, and to exercise discipline to ensure the weapon is operated in a safe manner.  Human nature does intervene and, inevitably, “things happen”.  Marines do fire weapons in the wrong place, at the wrong time, or at the wrong target.  Commonly referred to as a “negligent discharge,” such acts are punishable under the Uniform Code of Military Justice1.  The Marine Corps took action to prevent negligent discharges and developed the “Four Weapons Handling Safety Rules” that every Marine knows by heart.  The Four Safety Rules are:

    1. Treat every weapon as if it were loaded.
    2. Never point a weapon at anything you do not intend to shoot.
    3. Keep your finger straight and off the trigger until you are ready to fire.
    4. Keep the weapon on SAFE until you intend to fire.2

The Four Safety Rules will guide development of “Four Internet Safety Rules” that everyday Internet users can apply to protect their information and information systems.  The First Weapons Handling Safety Rule, “Treat every weapon as if it were loaded,” charges the Marine to maintain a proper mindset when handling and using a weapon.  Likewise, computer users must maintain a proper mindset when using computer devices, whether a personal desktop, mobile device, or work computer.  This leads to the:

First Internet Safety Rule:  Treat your device and information as if it were constantly under threat.

Users must be mindful that their device and the information on it are under a varying level of threat every time the device connects to the Internet.  Threat types and levels vary per user; therefore, users need to periodically review the threats affecting their environment and then plan appropriately.  Maintaining a proper defensive mindset will guide each decision the user makes regarding security settings, choice of passwords, timely operating system and application software updates, and use of anti-virus and anti-malware protection.  Protecting information also means developing and executing an appropriate data backup plan and then verifying that the plan works as designed.  Maintaining a defensive mindset on a network requires timely reporting if anything seems amiss.  A noted problem on a network can affect other users on the same network.

The Second Weapons Handling Safety Rule, “Never point a weapon at anything you do not intend to shoot,” reminds Marines to be mindful of the damage a weapon could cause if a projectile struck an unintended object.  Similarly, users must be mindful of where they “point” their browsers, what emails they open, what they download, and the links they click.  This mindfulness leads to the:

Second Internet Safety Rule:  Do not access websites, download applications, or open email with which you are not familiar.

Accessing unfamiliar or questionable websites, downloading non-authentic applications, or opening spam email can lead to installing unwanted malware on an unknowing user’s device.  The unintended consequences resulting from the malware can lead to information theft or corruption, slower operating devices, and even complete information loss via ransomware.  Constant awareness of where a device “points” is a necessary condition to avoiding the resultant damage.

The Third Weapons Handling Safety Rule, “Keep your finger straight and off the trigger until you are ready to fire,” relies on an external safety measure: the weapon can fire only if the Marine engages a key external component, the trigger finger.  Here, it is the absence of that external component that provides a measure of safety; the weapon will not function unless the component is added in the course of normal operations.  In cyber security, by contrast, it is the addition of external safety measures that provides extra security.  Users can employ external safety measures appropriate to their situation to maintain and increase their security level.  These measures lead to the:

Third Internet Safety Rule:  Keep all applications, firmware, middleware, operating systems, and anti-malware software patched and up to date.

Using external application, operating system, and anti-malware update servers to patch and detect known vulnerabilities will help users reduce the number of vulnerabilities on their devices and make them harder to exploit.  Hardened targets will dissuade all but the most dedicated attackers and cause them to search for easier targets.  A safe course of action is for users to learn how to enable their auto-update settings.  Auto-update will ensure patching occurs on a regular basis; however, it is smart practice to periodically check and ensure that auto-update functions correctly.   

The Fourth Weapons Handling Safety Rule, “Keep the weapon on SAFE until you intend to fire,” makes use of internal safety measures and defenses purposefully designed into the device.  A service rifle has a built-in safety feature that prevents the weapon from firing even if the Marine squeezes the trigger.  This built-in feature aids in preventing negligent discharges, and the Marine should disable it and select FIRE only when ready to employ the weapon.  At all other times the weapon should be on SAFE.  Likewise, users need to make use of internal operating system settings, security application controls, and ancillary peripherals to the maximum extent practical to reduce risk to personal information and information systems.  Lack of knowledge or inexperience is a common reason why internal controls are not used properly or to the fullest extent.  For example, users commonly deploy wireless routers “out of the box” with easy-to-find default configurations and passwords, post to social media sites without proper security settings checked, and place misconfigured servers online.  These common mistakes lead to the:

Fourth Internet Safety Rule:  Know and use maximum level security settings to keep online personal information as safe as possible.

Applying Moore’s Law, phones, tablets, laptop and desktop computers, applications, and networking devices will keep becoming smaller, faster, more complex, and more capable.  The old joke used to be about how hard it was to program a VCR.  Today, a mobile device can easily overwhelm a novice user in terms of privacy settings, default configurations, and what is considered a trusted application.  Fortunately, the same Internet that poses danger at every turn also provides help in the form of Google searches and YouTube instructional videos.  Users can search for “how-do-I-…?” instructions and can receive a wealth of content in return.  Blogs, manufacturer websites, communities of interest, and videos all provide tips, tricks of the trade, and more, but users must beware of illegitimate sites.  Surfing to reputable sites is best, starting with the manufacturer and branching out from there.  Trusted friends, co-workers, or the “IT guy” are good sources too.  Above all, users need to ask if unsure.

Thus, the Four Internet Safety Rules are:
1. Treat your device and information as if it were constantly under threat.
2. Do not access websites, download applications, or open email with which you are not familiar.
3. Keep all applications, firmware, middleware, operating systems, and anti-malware program software patched and up to date.
4. Know and use maximum level security settings to keep online personal information as safe as possible.

Prior to every Marine Corps live-fire exercise, the Officer-in-Charge or Range Safety Officer conducts a safety brief for every participating shooter.  Without a doubt, the Four Weapons Handling Safety Rules are discussed and each shooter will restate each Safety Rule.  Since the Rules are ingrained in every Marine, each shooter can easily rattle them off.  Generally, the briefer will discuss each Rule and ensure that the weapons handling knowledge is front and center in each shooter’s mind prior to starting the exercise.  So it should be with the Four Internet Safety Rules.  How each office, shop, unit, or organization chooses to indoctrinate and reiterate the Four Internet Safety Rules is a matter of analysis, decision, and execution.  Publishing policy that addresses each Rule, user expectations and consequences, and, most importantly, why the Four Internet Safety Rules are important, is the best place to start.  Publicly addressing and “selling” the policy and Four Internet Safety Rules provides the leadership the opportunity to look users in the eye and reinforce the need for Internet safety and each user’s role in ensuring that safety for the entire organization.  Leaders who take the opportunity to remind users of the importance of Internet safety and use novel discussion methods will eventually drive the point home, if only to ensure that, when users see the leadership walking about, Internet safety will come to mind.  Posting the Four Internet Safety Rules on websites, in break rooms, on log-on banners, and on pop-ups will reinforce the message.  Developing a very public award system for users or departments that go the longest without cybersecurity incidents introduces the natural competitive spirit and peer pressure to reduce incidents.
Likewise, developing and discussing cases, reported by news agencies, of users who suffered the consequences of not following the Rules will add a needed dose of “It really can happen to you” to any instruction or discussion period.  Regardless of how well users accept the Four Internet Safety Rules presented in this article, users do need a few simple, general rules to guide their Internet use and remain reasonably secure in the course of their online activities.

About the Author

A Marine Communications Officer, Lieutenant Colonel John Dobrydney is an experienced cybersecurity and network operations planner. He recently served as the Commanding Officer of Marine Wing Communications Squadron – 18, the Executive Officer of 7th Communication Battalion, the Network Operations Officer for the III MEF G6, and the Enterprise Information Assurance Branch Head at Headquarters, Marine Corps C4 Directorate. He currently serves as the Cybersecurity Division Chief, Joint Staff J6. Lieutenant Colonel Dobrydney holds a Master of Security Studies from the Marine Corps War College and a Master of Science in IT Management from the Naval Postgraduate School.

1UCMJ art. 134 (2012).
2U.S. Marine Corps. (2012). Rifle marksmanship REVISED (MCRP 3-01A). Albany, GA:


[Book Review] On Cyber: Towards an Operational Art for Cyber Conflict

posted Dec 28, 2017, 3:07 PM by James Caroland   [ updated Dec 28, 2017, 3:13 PM ]

By James Caroland, Editor-in-Chief, Cyber Magazine

On War is one of the preeminent books on military strategy and war, and is practically required reading for any officer in the United States military, as well as in many international militaries.  You will find it in curricula at Service and National War Colleges where warfare is studied, and any officer who has completed their Joint Professional Military Education (JPME) probably has a copy on their bookshelf.  I can see my tattered, dog-eared copy from across my office as I write this.  On War was written by Prussian General Carl von Clausewitz … in the early 1800s … long before the Advanced Research Projects Agency (ARPA) invented what came to be known as the Internet.

Fast forward over a century and a half to encounter ubiquitous cyberspace, its designation as the fifth domain of warfare, and the creation of military commands for cyber warfare. Cyber conflict has already been happening for over a decade. While there have been many articles written about various aspects of cyber conflict over that time, there has not been a comprehensive book that addresses how militaries defend, fight, and win in cyberspace – until now.

With a title that I assume is a nod to Clausewitz’s On War, Gregory Conti and David Raymond have written On Cyber: Towards an Operational Art for Cyber Conflict, updating Clausewitz’s teachings (among others) to account for the advent of cyberspace.  With cyber conflict being primarily, though not exclusively, a military realm, the book’s foundation is largely military doctrine.  However, Conti and Raymond take the language of warfighting and make it accessible to a non-military audience. They logically organize the book by the various elements of combat (terrain, maneuver, intelligence, command and control, etc.); address each of these across the strategic, operational, and tactical levels of war; and explain how each relates to and can be leveraged in cyber conflict.

Conti and Raymond creatively balance quoting historical military strategists (e.g., Clausewitz, Jomini, Sun Tzu, Napoleon, Patton) and discussions of traditional warfighting (e.g., the Battle of Marathon (490 BC), the Civil War, World Wars I/II) with quoting modern-day cybersecurity professionals (e.g., Dan Kaminsky, the grugq, Whitfield Diffie, Dan Geer) and discussions of recent cyber events. There are sometimes two camps in the cyber versus kinetic warfighting debate: those who say cyber can be applied to any kinetic warfighting concept and those who say cyber is completely unique.  Conti and Raymond caution each camp against dismissing the other and do a credible job presenting how traditional kinetic warfighting may or may not apply in cyber conflict, providing concrete examples, illustrative graphs and tables, and well-researched points to make their cases and recommendations.

The book is certainly not without its references to geek culture, which is appreciated by many of us in the cyber community. These include books, movies, comics, and television shows such as Star Trek, The Matrix, Harry Potter, Ender’s Game, Robocop, The Terminator, and X-Men, as well as nerdcore music lyrics. I was a little sad, though, to see that Star Wars did not make the cut. These references are not simply randomly inserted, but deliberately used to elucidate some facet of cyber conflict in a clever way.

Other than hoping for a Star Wars allusion, as a Naval officer I was personally hoping for more Navy context (along with that of the other military services).  Although there is mention of the Phalanx weapon system found on Navy ships, the book is rather Army-centric in its context.  This isn’t surprising, as Conti and Raymond are retired Army officers with over 50 years of combined service.  At the same time, you don’t have to be in the military to understand the book.  It is extremely well-researched, with 693 endnotes and many footnotes throughout its pages explaining various concepts, both military and cyber related.

Conti and Raymond also highlight several key themes integrated throughout the book: automation in cyber conflict is key; speed matters in cyberspace; attribution is hard; laws and policy can be limiting (for the good guys); modern technology influences decisions; geography is “different” in cyber conflict; and command and control is more than just humans. If for some reason you choose not to read the entire book, you can always skip to the end of each chapter, where conclusions and recommendations effectively tie together concepts from the chapter.

The book ends with “A Look at the Future”. While this final chapter does an excellent job covering technology on the horizon and potential ramifications for cyber conflict, it also emphasizes that cyber conflict is more than just technology. It addresses creating an agile culture, having multi-disciplinary teams, adapting cyber institutions, growing cyber talent, and updating doctrine. 

I highly recommend this educational, entertaining, and insightful book to anyone interested in cyber conflict/warfare. Military and government members, particularly senior leaders, strategists, planners, and decision-makers, should order their copy now. Much like my copy of On War, On Cyber: Towards an Operational Art for Cyber Conflict will undoubtedly become tattered and dog-eared from reference and use. 

Book Details

Authors:     Gregory Conti, David Raymond
Editor:        John Nelson
Pages:        352 (paperback)
Publisher:   Kopidion Press
Date:          July 18, 2017
ISBN-10:    0692911561
ISBN-13:    978-0692911563

Punching Above Its Weight: Estonia as a Cyber Power

posted Dec 14, 2017, 8:35 PM by James Caroland   [ updated Dec 14, 2017, 8:43 PM ]

By Michael Lenart

An Unlikely Model?

In 2007, the Baltic nation of Estonia moved a statue of a Red Army soldier from the center of its capital, Tallinn, to a military cemetery on the outskirts of town. The move reflected the ethnic Estonian majority population’s perspective that the Red Army symbolized occupation and oppression, rather than the defeat of Nazism – the intended message of the Soviet authorities who had erected the statue in 1947.1 Similarly, many of Estonia’s ethnic Russians perceived the relocation as disrespect toward a generation of patriots who had ostensibly liberated Estonia from Nazi invaders. As a result, many ethnic Russians rioted in protest, while Russian hackers launched massive Distributed Denial of Service attacks against Estonian government and private sector websites. This was the first ever large-scale cyber attack on a nation-state, and it took down the online services of banks, media outlets, and government organizations.2  Fortunately, however, the attacks were mitigated and eventually stopped. Perhaps even more fortunately, as we’ll see later on, Estonia’s decision to be publicly transparent about the attacks significantly advanced global discussion of cyber issues.

Estonia is a country of roughly 1.3 million people in a world of around 7.6 billion. Its land mass is about twice that of New Jersey3. Most people in the world probably aren’t sure where Estonia is located. From a tourism perspective, its biggest draw is the medieval Old Town section of its capital, complete with defensive walls going back as far as the thirteenth century.4

And, as previously recounted, the first cyber-related news most people ever heard about Estonia portrayed it undeniably as a victim.

These may not sound like the characteristics of a rising cyber power. However, Estonia continues to build its reputation as an innovative, IT-savvy state determined to shape the international information environment and fight effectively on the ever-evolving cyber battlefield. Described as a “high-tech hub whose engineers helped invent Skype,”5 Estonia boasts an impressive array of cyber accomplishments and initiatives. For one, the country maintains an unprecedented e-governance system. It also hosts NATO’s Cooperative Cyber Defense Center of Excellence, and it frequently kickstarts international conversations on key cyber issues.


The foundation of Estonia’s global reputation for electronic innovation is its trailblazing e-governance system. This system allows the overwhelming majority of government services to be performed online, greatly increasing efficiency and convenience. The e-estonia website’s description of the system is worth quoting at length, as it shows Estonians’ tendency to be both practical and strategic in their thinking:

"e-Governance is a strategic choice for Estonia to improve the competitiveness of the state and increase the well-being of its people, while implementing hassle free governance.

Citizens can select e-solutions from among a range of public services at a time and place convenient to them, as 99% of public services are now available to citizens as e-services. In most cases there is no need to physically attend the agency providing the service.

The efficiency of e-Government is most clearly expressed in terms of the working time ordinary people and officials save, which would otherwise be spent on bureaucracy and document handling.6"

At the center of this system is the national ID card, containing a chip with the cardholder’s embedded files. Using 2048-bit public key encryption, the card provides digital proof of identity and gives the holder access to a host of e-services, ranging from registering property titles to submitting court records, managing health care records and prescriptions, filing quick tax returns, registering businesses, and voting.7 The result is that only a tiny fraction of a person’s periodic administrative tasks requires physically going somewhere or mailing paper documents. Creating this virtual ecosystem has required significant investment in network and internet infrastructure, but the investment has made Estonia “the most advanced digital society in the world.”8

Perhaps the most novel feature of Estonia’s e-governance system is that, pending a background check, it’s open to literally anyone. Any person in any country can apply to be an “e-resident” of Estonia. The advantages of this openness are found in the global marketing of Estonia’s innovative national brand, and in the increased possibility that a person who, for instance, registers a new business in Estonia will end up paying for some government services there, using an Estonian bank, or partnering with other Estonian businesses.9

Cyber Defense Center of Excellence

Another very visible example of Estonia’s special status in cyber and electronic issues is the NATO Cooperative Cyber Defense Center of Excellence, located in Tallinn. The Center enhances the cyber expertise of NATO and its partners through education, research and development, lessons learned, and consultation.10 These efforts extend to the fields of technology, strategy, operations, and law as they apply to cyberspace.11

The Center’s most exciting activity is the Locked Shields live-fire cyber defense exercise, which has been held annually since 2010. Locked Shields challenges teams to maintain the networks and services of a fictional country by handling and reporting incidents, solving forensic challenges, and responding to various scenario injects. The 2017 Locked Shields tasked teams to maintain the services and networks of a military air base experiencing severe attacks on its electrical grid, command and control systems, unmanned aerial vehicles, critical information infrastructure components, and other operational infrastructure. The exercise featured around 800 participants from 25 countries, and deployed over 3,000 virtualized systems in the simulated fight.12

The Center also hosts CyCon, perhaps the world’s pre-eminent cyber conference. Each year, CyCon attracts hundreds of international decision-makers and experts from government, academia, and industry. Like the Center’s overall approach to cyber issues, CyCon approaches topics from a variety of perspectives, e.g., legal, technological, and strategic. CyCon 2017, for instance, focused on the following themes: How can the ‘core’ elements of cybersecurity be defined? How do they relate to the essential assets and principles in technical, legal, and political contexts? How can defenders protect critical information infrastructure? How can critical vulnerabilities be mitigated and the most serious threats countered? How can legal frameworks be established and applied to cybersecurity? What technologies can help counter emerging cyber threats? How can effective cybersecurity strategies be developed and implemented? What should the role of the armed forces be in executing these strategies? How can countries deter cyber attacks against core national assets?13

While CyCon generally occurs in the spring, the Center also co-sponsors CyCon U.S. each fall with the U.S. Army Cyber Institute. The overarching theme of CyCon U.S. 2017 was “The Future of Cyber Conflict.” The conference explored how the increasing prominence of cyberspace in everyday life combines with emerging technologies and scientific breakthroughs, such as quantum computing, machine learning, Big Data, and robotics, to expand the battlespace and perhaps even redefine the concepts of war and peace.14

Last but not least, the Center also facilitated the writing and publishing of the Tallinn Manual 2.0. Written by nineteen international law experts and formally titled The Tallinn Manual 2.0 on the International Law Applicable to Cyber Operations, this publication is an update of the original 2013 Tallinn Manual. While both versions are based on the understanding that pre-existing international law applies to cyberspace, the original manual focused only on the most extreme cyber incidents, such as those that occur during armed conflict. Tallinn 2.0, however, also addresses more “day-to-day” legal considerations. These include principles of general international law, such as sovereignty and jurisdiction, as well as issues of state responsibility, such as legal standards for attribution. Tallinn 2.0 also delves into human rights law, air and space law, the law of the sea, and diplomatic and consular law as they apply to cyber operations.15 Though the views expressed in Tallinn 2.0 are only those of its authors, and not necessarily official policy of NATO or its member states, it nevertheless remains a well-recognized and widely valued legal resource.

Kickstarting the Big Conversations

The final way Estonia exerts its outsized cyber influence is more abstract than its e-governance system and center of excellence, but is perhaps equally important. Namely, it encourages – and sometimes even initiates – key global discussions that otherwise might not occur. The most prominent example was its decision to admit publicly in 2007 that it was under attack, and to be unusually forthcoming about some of the details. As a result, the issues exposed and lessons learned from the attacks led to much greater international cooperation in cybersecurity.16 A more typical, protect-our-secrets-and-reputation style of communication about the attacks may have delayed much of the progress Estonia and its partners have since enjoyed.

A recent and particularly clever example of Estonia’s knack for convening important cyber discussions is CYBRID 2017.17 Estonia, taking advantage of its turn as President of the Council of the European Union (EU), gathered EU Defense Ministers for an exercise to see how they would respond to a fictional cyber attack. In the exercise scenario, a minor cyber incident slowly and ambiguously evolved into a full-out attack on military communications systems, eventually preventing EU headquarters from communicating with ships operating in the Mediterranean. With each (often unclear) development in the scenario, the Defense Ministers were asked how they would respond. The exercise quickly revealed “how difficult it is to evaluate how bad things are,” and presented “bureaucratic roadblocks and geopolitical concerns” that the ministers had difficulty addressing.18 Primary challenges included determining when and how to communicate with other countries, the public, and critical infrastructure providers.19 Many of these requirements simply hadn’t been thought through before, at least not at the national (and international) policymaker level. But CYBRID 2017 began to remedy that, and one hopes similar exercises will follow.

Looking Ahead

One can arguably say that the word “innovative” gets thrown around too easily in contemporary discussions about people, organizations, and other types of entities. However, with a unique e-governance system, a world-renowned cyber center of excellence, and a creative knack for goading its friends in the right direction on key issues, Estonia can make a more legitimate claim than most to this tired (but still relevant) descriptor. Consequently, its reputation as a major player in cyber and digital issues is well-deserved. Considering its size and its only recent re-entry into the free world, these distinctions are all the more impressive.

Further, as the world’s everyday activities become increasingly electronic and data-driven, the relative power of states will be determined at least a bit more by digital competencies and capital. Thus, proactive countries like Estonia will be well-positioned to benefit from this shift, whatever its magnitude.

About the Author

Michael Lenart is an Army Strategist on detail to the U.S. State Department. His areas of interest include U.S. and international security issues, cyberspace operations, and organizational change.

1Damien McGuinness, BBC News. “How a cyber attack transformed Estonia.”
3CIA World Factbook.
4visit estonia.
5Ott Ummelas, Bloomberg Politics. “NATO’s Baltic Outpost Digging Cyber Trenches for Europe.”
8Ben Hammersley. “Concerned about Brexit? Why not become an e-resident of Estonia.”
10Cooperative Cyber Defence Center of Excellence. About Us.
11Cooperative Cyber Defence Center of Excellence.
12Locked Shields 2017.
13CyCon 2017.
14CyCon U.S.
15Tallinn Manual 2.0 on the International Law Applicable to Cyber Operations.
16Lauri Almann. 10 Years of Cyber Estonia: What will the Next Decade Bring? Panel discussion, Center for Strategic and International Studies, November 6, 2017.
17Caroline Houck. “Cyber Defense is Very Much About Political Decisions.”

Photo credits (in order of appearance): Radio Free Europe Radio Liberty, Operation World, e-estonia, OSET Foundation, NATO, Indian Strategic Studies, Delfi.

Monitoring the Landscape of Cyberspace

posted Nov 5, 2017, 10:25 AM by James Caroland   [ updated Nov 5, 2017, 10:26 AM ]

By Ray Mollison

My previous article, Building a Cadre of Cyber Intellectuals, introduced Cyber Intelligence (CYBINT) as an intelligence discipline that provides clarity on vulnerabilities, exploits, and threats in cybersecurity. Cyber Intelligence can help build a stronger cybersecurity posture by conceptualizing the cyberspace landscape at three levels: operational, tactical, and strategic. This gives decision-makers a comprehensive analysis of state and non-state actors’ capabilities, skill sets, and the intentions behind their cyber attacks.

This article will focus on Cyber Threat Intelligence (CTI), the practice of sharing information on current and emerging cyber threat trends within a community of businesses, organizations, and government entities. It is uncertain whether an impenetrable cybersecurity posture can ever exist, or whether there is a technical solution that stops all cyber threats. It is going to take more than firewalls to stop malicious threats and attacks from penetrating computers and systems. To gain the upper hand in combating cyber threats, defenders need to understand the cyberspace landscape of vulnerabilities and exploits. Implementing CTI could be a tangible way to enhance the cybersecurity posture against cyber threats.

Gartner best describes Cyber Threat Intelligence as “evidence-based knowledge, including context, mechanisms, indicators, implications and actionable advice, about an existing or emerging menace or hazard”.1 CTI is the collection of raw cyber threat information that is evaluated and aggregated into actionable intelligence. It is performed through the lens of the intelligence lifecycle (plan, collect, process, produce, and disseminate) and focuses on identifying indicators of cyber threats such as malware, spear phishing, password attacks, ransomware, and Denial of Service (DoS).2 These cyber threats are examples of what businesses, organizations, and government entities are exposed to on their networks daily, which highlights why networks need to be monitored and controlled to ensure computers and systems are secured against cyber threats.

CTI is the integration of human intelligence with technical intelligence, allowing an organization to concentrate on existing and emerging threats.3 It is a forward-leaning methodology for detecting possible threat trends in real time. To understand cyber threats, there are three factors to consider when assessing an actor’s motives: Intent, Capability, and Opportunity.

Intent is a malicious actor’s desire to target your organization
Capability is their means to do so (such as specific types of malware)
Opportunity is the opening the actor needs (such as vulnerabilities, whether it be in software, hardware, or personnel)4
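The three factors above can be combined into a rough prioritization of actors. A minimal sketch, assuming a hypothetical 0–3 scale per factor and equal weighting; the actor names and scores are invented and not part of any standard CTI model:

```python
# Hypothetical sketch: ranking actors by intent, capability, and opportunity.
# The actors, 0-3 scores, and equal weighting are illustrative only.
actors = {
    "ActorA": {"intent": 3, "capability": 2, "opportunity": 1},
    "ActorB": {"intent": 1, "capability": 3, "opportunity": 3},
}

def threat_score(factors):
    # A threat requires all three factors; a zero in any one zeroes the score.
    if min(factors.values()) == 0:
        return 0.0
    return float(sum(factors.values()))

ranked = sorted(actors, key=lambda a: threat_score(actors[a]), reverse=True)
print(ranked)  # ['ActorB', 'ActorA']
```

An analyst could of course weight the factors unevenly, for example doubling Intent for actors with a history of targeting the organization.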

Understanding these three factors can add insight into current cyber threat activities and help project future outcomes by analyzing the actors’ actions, means, and needs. Defining the actors’ motives helps analysts understand their tactics, techniques, and procedures. The methodologies and motives of cyber attacks are the virtual fingerprints of cyber threats; therefore, utilizing a collaborative platform to share real-time threats adds clarity to the composition and characteristics of attacks. Using CTI, a Cyber Threat Analyst examines an actor’s digital fingerprint through aggregated collection sources: technical sources, open sources, and closed sources.5

Technical Sources include the Security Information and Event Manager (SIEM), Intrusion Detection Systems (IDS), firewalls, next-generation endpoint security platforms, and logs from any number of devices
Open Sources such as published vendor reports, any number of free feeds of indicators, vendor vulnerability lists (Microsoft, Apple, Adobe, etc.), and media sources
Closed Sources may include community mailing lists, or organizations such as Information Sharing and Analysis Center (ISACs)

There are many Threat Intelligence Platforms (TIPs) available for threat analysts to aggregate, correlate, and analyze threat data from multiple sources in real time.6 These platforms help Threat Intelligence Analysts corroborate threat data and quantify the strength of indicators of potential cyber threats. They are designed to be shared across small and large businesses, manufacturers, industries, banks, and government and private organizations in order to improve security within a trusted community. An example of a Threat Intelligence Platform is ThreatStream (Anomali), founded by Greg Martin.7 ThreatStream is designed to Collect, Optimize, Integrate, and Share.8

Collect: portal to access hundreds of threat intelligence feeds.
Optimize: normalizes and optimizes intelligence, making it more actionable.
Integrate: out of the box integrations with SIEMs, firewalls, and other systems.
Share: offers two-way sharing and secure trusted circles for vetted collaboration.

An advantage of utilizing TIPs is that most organizations already use threat intelligence as part of their cybersecurity programs, where it has become valuable to the security mission and necessary for maximizing the value of intelligence data.9 TIPs have become critical to organizations that value a collaborative community and exercise innovative solutions to deter and combat cyber threats. However, there are disadvantages to using TIPs: they can be overwhelmingly complex, difficult to integrate with other security technologies, and poorly aligned between analysts and operational security events.10

The lack of professional expertise is one of the biggest hurdles to overcome with threat intelligence platforms.11 For example, at the heart of a threat intelligence platform is the Security Operations Center (SOC), where technical information is collected in real time. The SOC is the nucleus of threat intelligence, where technical experts examine and evaluate current threat trends and aggregate data into actionable intelligence.12 These experts monitor an integrated set of systems in real time, from SIEMs to firewalls. The SOC needs technical experts with the right education and experience to correctly and accurately identify cyber threats; they must possess technical knowledge along with a broad range of capabilities and diverse experiences.13 Therefore, the pool of talent is limited to a select few applicants, making it hard to fill the roles and responsibilities of this position.

Figure 114 to the right details the process of threat intelligence as a visual representation. The diagram conceptualizes threat intelligence as an ecosystem, an interactive organism within interconnected communities and systems. The Threat Intelligence Ecosystem sits at the center, supported by the surrounding pyramids: a Threat Intelligence Analyst collects and analyzes information, the Security Operations Center monitors threats, and the Leadership makes decisions. These pyramids fortify the epicenter of the ecosystem, conserving and preserving a healthy collection of Threat Intelligence for the Leadership to act upon. Most importantly, the Leadership will be able to understand how cyber threats impact the cyberspace landscape, allowing decision-makers to accurately develop strategic and tactical intelligence frameworks. The maturity of these frameworks can help an organization focus its energy and resources to effectively and efficiently neutralize or degrade cyber threats while stabilizing the cyberspace ecosystem.

CTI will soon become a greater part of business, government, and private organizations’ cybersecurity portfolios, helping identify the likelihood of future threats. Utilizing CTI can detect and prevent potential threats, reinforcing a strong cybersecurity posture through the ability to counter threats before they materialize. Threat Intelligence Platforms can strengthen the collection of data gathered in real time, with the intent of producing accurate and actionable intelligence reports for preparing and planning against potential cyber threats. This could lead to a stronger defensive security posture built on Operational, Tactical, and Strategic Cyber Intelligence products that are adaptable and innovative against cyber threats. In addition, these platforms can assist in holistically comprehending the virtual landscape of threats deployed within cyberspace, a landscape that will continue to grow and progressively cultivate new threats.

About the Author

Ray Mollison is a field-grade officer in the Military Intelligence Readiness Command (MIRC) as an Army Reservist. He is pursuing his Master’s degree in Cybersecurity at the University of South Florida. Ray enjoys working out and spending time with family.

1iSightpartners (2014) What is cyber threat intelligence and why do I need it? [online], wp-content/uploads/2014/07/iSight_Parterns_What_Is_20-20_Clarity_Brief.pdf 
2 cyber threat-intelligence/
3iSightpartners (2014) What is cyber threat intelligence and why do I need it? [online], wp-content/uploads/2014/07/iSight_Parterns_What_Is_20-20_Clarity_Brief.pdf 
4 cyber threat-intelligence/  


A Low Likelihood of Cyber Attack on USS MCCAIN

posted Oct 29, 2017, 3:24 PM by James Caroland   [ updated Oct 29, 2017, 3:27 PM ]

By Ian W. Gray

On August 21, 2017, the USS JOHN S. MCCAIN (DDG-56) collided with the merchant vessel Alnic MC1 while transiting east of the Strait of Malacca, one of the busiest chokepoints in the world.  The collision was the second instance this year of a U.S. warship colliding with a merchant vessel2, and the fourth U.S. Navy incident at sea this year3.  All of these accidents occurred in close proximity to Asia, leading some analysts to believe they could be part of a cyber operation4.  That hypothesis is seemingly supported by increasing U.S. tensions with China over Freedom of Navigation Operations in the South China Sea5, and by provocations from North Korea amid nuclear tests and U.S.-supported war games6 in proximity to the Hermit Kingdom.  Given these rising geopolitical tensions, many observers discount coincidence as an explanation.  However, this logic has likely led to a confirmation bias regarding cyber operations that should be further analyzed.

“Cyber” has become a convenient justification for the loss of availability of infrastructure and equipment wherever technology plays a predominant role (which encompasses most things these days).  This reasoning is further fueled by the covert nature of cyber attacks, and by the recent increase in publicized state-sponsored cyber operations from actors including China, North Korea, Iran, and Russia.  However, unlike fixed infrastructure and computer servers, ships are transitory and susceptible to a number of additional environmental factors, such as weather and natural lighting conditions.  Additionally, ships transiting high-traffic-density areas are contending with a host of other vessels, the performance of their own navigation and propulsion systems, and the maintenance and operation of those systems by their crews.

In June 2017, the Baltic and International Maritime Council (BIMCO) updated their “Guidelines on Cybersecurity Onboard Ships7” to include further recommendations on network and cyber security.  The potential vulnerabilities that BIMCO identified include bridge systems, cargo management, propulsion and power control systems, access control, and ship-to-shore communications.  The potential attack vectors, similar to shore based facilities, include brute force, supply chain compromise, phishing and social engineering.  The increasing connectivity and automation of shipboard control systems makes them susceptible to these vectors.  However, several navigation and communication systems are also vulnerable to a loss of availability and integrity, through attacks like jamming and spoofing. 

United States warships have a suite of technology designed to complete multiple complex mission areas, though navigation and propulsion remain paramount to crew safety and operational success. Guidelines for the construction and operation of navigation and propulsion equipment for both merchant vessels and warships are promulgated by the International Maritime Organization’s (IMO) Safety of Life at Sea (SOLAS) convention.  The convention has been updated to mandate the adoption of technology like the Global Positioning System (GPS), Automatic Identification Systems (AIS), and Electronic Chart Display and Information Systems (ECDIS). 

Both warships and merchant vessels could be targeted by GPS spoofing and jamming.  These types of attacks have been demonstrated by China to counter U.S. drones in the South China Sea8, and by North Korea to disrupt maritime and air traffic in South Korea9.  Other recent reports indicate a mass GPS spoofing attack in the Black Sea10 off the coast of Russia, and spoofing as a method used by Iran to exert dominance and control over the Persian Gulf11.  By broadcasting counterfeit signals, an attacker can cause shipboard GPS receivers to display a position of the attacker's choosing.  Such attacks could be part of an anti-access/area denial (A2/AD) strategy, though they are likely not the cause of the MCCAIN collision. 

SOLAS requires all ships to carry AIS in order to provide information to surrounding ships and coastal authorities for safety at sea.  AIS, which uses GPS coordinates and radio transmissions, is also susceptible to cyber-attacks, as Trend Micro demonstrated in 201412.  These attacks could include denial-of-service, the appearance of a spoofed vessel, the omission of information about a vessel, or other false information, including shipboard emergencies.  If targeted properly, this information could cause a ship to alter its course or speed, or to take additional actions that could endanger its safety.  AIS is not used as a means of navigation, however, and any maneuvering decision a ship makes would likely be verified with alternate means, like radar.  
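The weakness Trend Micro exposed is easier to appreciate given how AIS data is carried. AIS reports reach shipboard equipment as plain NMEA 0183 sentences whose only integrity check is a one-byte XOR checksum; nothing authenticates the transmitter. Below is a minimal sketch of that checksum, simplified for illustration and not drawn from any shipboard implementation:

```python
def nmea_checksum(sentence: str) -> str:
    """XOR every character between the leading '!' (or '$') and the '*'
    delimiter; NMEA 0183 sentences carry this as two trailing hex digits."""
    body = sentence[1:sentence.index("*")]
    value = 0
    for ch in body:
        value ^= ord(ch)
    return f"{value:02X}"

def is_valid(sentence: str) -> bool:
    """Accept any sentence whose trailing hex digits match the XOR sum --
    a value any transmitter, legitimate or spoofed, can trivially produce."""
    claimed = sentence[sentence.index("*") + 1:].strip()
    return nmea_checksum(sentence) == claimed
```

Because the check is a simple XOR rather than a cryptographic signature, a counterfeit vessel report carrying a correctly computed checksum is indistinguishable, at this layer, from a genuine one.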

In 2005, the US Navy began a fleet-wide implementation of ECDIS on surface ships and submarines13, a system that integrates several navigation sensors and GPS receivers to provide an operational picture for voyage planning and ship movement.  The electronic system has the added benefit of downloadable charts and corrections, which eliminates the need for manual pen-and-ink changes to paper charts.  Though a cyber-induced error could originate in ECDIS, any error capable of causing a collision would likely have to compromise a number of other inputs as well, including civilian and military radars and GPS. 
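The point that a collision-causing error would have to defeat multiple independent inputs can be illustrated with a simple consistency check: compare each GPS fix against a dead-reckoning estimate carried forward from known course and speed, and flag divergence. This is a hypothetical sketch of the cross-checking principle, not a description of any fielded Navy system, and the one-nautical-mile tolerance is an arbitrary assumption:

```python
import math

def dead_reckon(lat, lon, course_deg, speed_kts, hours):
    """Advance a position along a course at a given speed (flat-earth
    approximation in degrees, adequate over short intervals)."""
    dist_nm = speed_kts * hours
    dlat = dist_nm * math.cos(math.radians(course_deg)) / 60.0
    dlon = (dist_nm * math.sin(math.radians(course_deg))
            / (60.0 * math.cos(math.radians(lat))))
    return lat + dlat, lon + dlon

def gps_consistent(gps_fix, dr_fix, tolerance_nm=1.0):
    """Flag a GPS fix that diverges from the dead-reckoning estimate by
    more than the tolerance -- a possible indicator of spoofing."""
    dlat_nm = (gps_fix[0] - dr_fix[0]) * 60.0
    dlon_nm = (gps_fix[1] - dr_fix[1]) * 60.0 * math.cos(math.radians(dr_fix[0]))
    return math.hypot(dlat_nm, dlon_nm) <= tolerance_nm
```

A spoofed fix that drifts gradually could still defeat a check like this, which is why layering it with independent sensors such as radar matters.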

While the Navy has developed additional countermeasures to protect its systems from cyber-attacks, the merchant fleet has not uniformly employed similar protections.  Though SOLAS has mandated the implementation of GPS, AIS, and ECDIS, merchant ships have until 2021 to integrate their own cyber risk frameworks14.  Though the likelihood of a cyber-attack against U.S. warships is relatively low, the incident investigation should take into account the cyber risk posture of the more than 51,000 other merchant vessels transiting the high seas. 

The traffic density of the Strait of Malacca lends credence to a more likely scenario: avoiding multiple merchant vessels in a heavily trafficked area while possibly also managing an engineering casualty.  Current reports indicate that MCCAIN may have suffered a loss of steering prior to the collision, and there is currently no indication of a cyber-attack.  Though there are backup control measures to shift steering from the pilot house to the aft steering control room, it may not have been possible to do so in time to steer clear of incoming merchant traffic.  Though cyber is an increasingly common attack vector for state actors, we should be careful about prematurely labeling this incident a cyber-attack.  

Though two collisions involving similar shipboard platforms (Arleigh Burke destroyers/Flight 1A), along with other accidents in Asia, may appear to be more than coincidence, we need to examine the factors leading up to and contributing to each incident.  Most incidents at sea are the culmination of a number of factors: environmental, situational, and material.  Though current events suggest that cyber could possibly be a factor, we should not let the possibility of a cyber outcome guide the analysis of an investigation. While the two destroyers damaged this past summer can be repaired, the loss of their sailors cannot be undone.  

About the Author

Ian W. Gray is a senior intelligence analyst at Flashpoint, where he focuses on producing strategic and business risk intelligence reports on emerging cybercrime. Ian is also a military reservist with extensive knowledge of the maritime domain and regional expertise of the Middle East, Europe, and South America.

1H. Beech and M. Haag, “10 Missing After U.S. Navy Ship and Oil Tanker Collide Off Singapore,” Aug. 2017;
2J. Borger, M. Farrer and O. Holmes. “Pentagon Orders Temporary Halt to US Navy Operations After Second Collision,” Aug. 2017;
3S. Ferrechio. “John McCain Supports Navy Operations Pause After Fourth Accident,” Aug. 2017;
4C. Chang. “Hacking Link To USS McCain Warship Collision? Expert Says ‘I Don’t Believe in Coincidence’,” Aug. 2017;
5A. Panda. “China Reacts Angrily To Latest US South China Sea Freedom of Navigation Operation,” Jul. 2017;
6J. McCurry, E. Graham-Harrison, S. Siddiqui. “US Increases Pressure On North Korea After Missile Test,” Jul. 2017;
7The Guidelines On Cyber Security Onboard Ships, white paper. BIMCO. Jul. 2017
8D. Goward. “GPS Spoofing Incident Points to Fragility of Navigation Satellites,” Aug. 2017;
9K. Mizokami. “North Korea Is Jamming GPS Signals,” Apr. 2016;
10S. Goff. “Reports Of Mass GPS Spoofing Attack In The Black Sea Strengthen Calls For PNT Backup,” Jul. 2017.
11I. Gray. “Cyber Threats To Navy And Merchant Shipping In The Persian Gulf,” May 2016;
12Threats at Sea: A Security Evaluation of AIS, white paper. Trend Micro. Dec. 2014
13J. Rhodes and M. Abshire. “U.S. Navy Announces Plans To Convert Fleet to ‘Paperless’ Navigation,” Jul. 2005;
14I. Gray. “Petya Attack Shows The Need For Cybersecurity Rules,” Jun. 2017;

Photo credits (in order of appearance): DoD Live, Wikimedia

Building a Cadre of Cyber Intellectuals

posted Sep 4, 2017, 6:43 AM by James Caroland   [ updated Sep 11, 2017, 12:31 PM ]

By Ray Mollison

Cyber-attacks are growing in number and evolving rapidly each year, making it harder to combat cyber threats effectively. One can best understand cyber-attacks through the application of intelligence to learn “about the cyber adversaries and their methods combined with knowledge about an organization’s security posture against those adversaries and their methods”. [1] Cybersecurity has become a central topic of discussion in the government and business sectors, where both sides are looking for solutions in a complex cyber world.

Cyber Intelligence (CYBINT) is a marriage of two disciplines: information technology and intelligence studies. Information technology is the study of creating, processing, storing, securing, and exchanging electronic data. [2] Intelligence is the study of credible and actionable information through collection, analysis, and distribution. [3] Even though CYBINT is still in its infancy as an intelligence discipline in academia and professional industries, it provides the clarity needed to understand cybersecurity vulnerabilities, exploits, and threats. A substantial amount of analysis in information technology or “cyber” roles already draws on intelligence. [4]

Just as Clausewitz famously identified the three levels of war in his book "On War”, the Cyber Intelligence Task Force from the Intelligence and National Security Alliance identified three parallel Levels of Cyber Intelligence: Strategic, Operational, and Tactical. [5] These Levels of Cyber Intelligence can help to acquire key information about U.S. adversaries’ capabilities. The three levels are:

Strategic Cyber Intelligence minimizes risk to an organization’s critical mission and assets of value by assessing threats and vulnerabilities. [6]
Operational Cyber Intelligence facilitates analysis to identify specific threat actors in order to reduce risks to critical information and intellectual property. [7]
Tactical Cyber Intelligence comprises the processes of examining priority requirements, collecting data, and developing actionable products. [8]

These Levels of Cyber Intelligence help to deter and neutralize threats through the process of analysis. It is important to note that the Joint Intelligence publication (JP 2-0) provides the fundamental principles and guidance that enhance the quality of intelligence tradecraft in support of joint operations. [9] This doctrine parallels the Levels of Cyber Intelligence, ensuring all intelligence disciplines are crafted with the highest level of expertise to minimize mistakes and maximize the quality of results for the decision-maker. 

The challenges of cyber are constant, and it is vital to continuously gain insight from past and present operations in order to improve future ones. The Levels of Cyber Intelligence define and refine how information is collected through the lens of data quantification in information technology. As shown in figure 2, the intelligence collection process in cyber must contain the “cycle of collection, analysis, dissemination, and feedback which must be continuous—not a periodic or intermittent—process.” [10]

Filtering information on networks strengthens the U.S. cybersecurity posture, making it more proactive than reactive; leaving information unfiltered does the opposite. Cyber Warfare conflicts range from political conflicts to espionage and propaganda, and the types of actors include nation-states, terrorists, and sociopolitical groups. [11] In Cyber Warfare, our adversaries’ intent is to attack our vulnerabilities in ways that could degrade, disrupt, and deny users’ access; destroy data, servers, and networks; or steal personally identifiable information. The application of Cyber Intelligence is to gain knowledge of our adversaries by studying their virtual footprint, cyber practices, and methodologies.

Therefore, the Levels of Cyber Intelligence play a role in filtering information to determine the reason for an attack, the intent of the conflict, and the type of malicious actor. Cyber Intelligence relies on fusing Human Intelligence (HUMINT) with timely and accurate Signals Intelligence (SIGINT) to respond to emerging and reemerging threats. [12] HUMINT, SIGINT, and CYBINT are inseparable disciplines that rely on one another to collect information and achieve actionable, reliable intelligence in Cyber Warfare.

In conclusion, the cyber world will continue to be unstable; however, it can be stabilized by learning adversaries’ tactics, techniques, and procedures to maintain a superior cybersecurity posture at all three levels of Cyber Warfare: strategic, operational, and tactical. Cyber Intelligence can help build a stronger cybersecurity position by offering the insight needed to better defend against an adversarial cyber-attack.

About the Author

Ray Mollison is a field-grade officer in the Military Intelligence Readiness Command (MIRC) as an Army Reservist. He is pursuing his Master’s degree in Cybersecurity at the University of South Florida. Ray enjoys working out and spending time with family.

[1] RSA. Getting Ahead of Advanced Threats. Jan. 2012. Web. < rpt-2.pdf>
[2] Rouse, Margaret. Information Technology. TechTarget. Apr. 2015. Web.
[3] Duverge, Gabe. Intelligence Studies vs Criminal Justice. Point Park University. Mar. 2015. Web.
[4] Tripwire. An Introduction to Cyber Intelligence. Jan. 2014. Web.
[5] Bamford, George, John Felker, and Troy Mattern. Operational Levels of Cyber Intelligence. Cyber Intelligence Task Force, Intelligence and National Security Alliance (INSA) White Paper, 2013.
[6] Dennesen, Kristen, John Felker, Tonya Feyes, and Sean Kern. Strategic Cyber Intelligence. Cyber Intelligence Task Force, Intelligence and National Security Alliance (INSA) White Paper, 2014.
[7] Hengel, Steven, Sean Kern, and Andrea Limbago. Operational Cyber Intelligence. Cyber Intelligence Task Force, Intelligence and National Security Alliance (INSA) White Paper, 2014.
[8] Hancock, Geoff, Christian Anthony, and Lincoln Kaffenberger. Tactical Cyber Intelligence. Cyber Intelligence Task Force, Intelligence and National Security Alliance (INSA) White Paper, 2015.
[9] Joint Publication JP 2-0. Joint Intelligence. Oct. 2013. Web.
[10] Borum, Randy, John Felker, and Sean Kern. "Cyber Intelligence Operations: More than Just 1s & 0s." Proceedings of the Marine Safety and Security Council: The U.S. Coast Guard Journal of Safety and Security at Sea, Vol. 71, Iss. 4 (2014).
[11] Goel, Sanjay. "Cyberwarfare: Connecting the Dots in Cyber Intelligence." Communications of the ACM, Vol. 54, No. 8, Aug. 2011, p. 132.
[12] "What is Cyber Threat Intelligence and why do I need it?" iSIGHT Partners. 2014.
[14] Ezendu, Elijah. "Competitive Intelligence." Slideshare. Jan. 2, 2010.

Cyber Threat Heat-Mapping

posted Aug 25, 2017, 4:49 PM by James Caroland   [ updated Sep 11, 2017, 12:32 PM ]

By MAJ Joe Marty

DISCLAIMER: All content in this article is derived from ideas in the author’s head, based on his experiences and observations. None of the methods or ideas presented describe actual methodologies used by the U.S. Army or any service branch of the Department of Defense. All information disclosed is UNCLASSIFIED.

Most people in the information security field are familiar with the "Cyber Kill Chain," [1] and some are also familiar with its successor in threat-mapping, the more granular MITRE Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK) matrix. [2] These models allow incident responders, cyber security defenders, and intelligence analysts to chronologically map the activities of intruders. Most threat activities can be categorized under one of the kill-chain stages, and under one of the tactics listed in the MITRE ATT&CK matrix.

Figure 1

    The benefit of modeling threat activities in these frameworks extends into both the past and the future. By identifying what a threat actor has already done, or recognizing what they have been known to do in other incidents/campaigns, incident responders can focus their clean-up and recovery efforts with targeted forensics. By identifying what the threat actor has done in other similar incidents, Cyber Security Service Providers (CSSPs) can focus their hardening efforts towards defense-in-depth strategies that will be effective at preventing the threat actor from succeeding in the next stages of the kill chain that have not yet been executed. 

    Each service branch in the Department of Defense (DoD) has drawn upon the proven methods of its respective domain (land, sea, air) and adapted its operations for the cyber domain. One classic method of developing intelligence in the tangible domains focused on nation-state threats. This method is sensible for conventional operations because, whether the operation is offensive or defensive, our military expected to attack or defend against the forces of a specific nation, nations, or non-state actors that often operated with similar capabilities and tactics. 

    Adapting this classic methodology to the cyber domain is still effective for offensive operations because targeted cyber effects are typically directed at a specific entity. However, this methodology provides limited tactical benefit for defensive cyberspace operations (DCO) because defenders are expected to defend against all threat actors regardless of their origin. Although the classic methodology can provide strategic context and high-level overviews, it does not enable the tactical activities in DCO because defenders cannot build comprehensive defense-in-depth from stove-piped information. One method that would develop actionable intelligence for DCO is a "heat-map" of the cyber kill-chain or MITRE ATT&CK matrix. 

    Heat-maps traditionally indicate concentration of activity (or whatever is being measured) by a color scale, where a darker color indicates greater concentration. Over time, as more threat activity is mapped, the most common/popular activity will appear darkest on the heat-map. Generating a cyber-threat heat-map will help CSSPs prioritize their defense-in-depth efforts, and enable them to secure their organization by focusing on the most likely attack vectors. Thus, when an intruder encounters the roadblocks built by the CSSP, those seeking easy entry will move on, and the persistent threat actors will be forced to change their behavior to succeed in their campaign. At worst this will delay their activities; at best, it will deter adversaries from continuing their pursuit, encouraging them to move on to “lower-hanging fruit” or another vector with less resistance.

    Using this cyber-threat heat-mapping methodology, an organization could populate a database with documented activities. [3] The events they observe and record could be categorized by kill-chain stage and MITRE ATT&CK method, and then tagged by threat actor. This database would enable analysts to quickly respond to identified threats because, as soon as observed events are queried in the database, the analyst can easily spot what the intruder has most likely done so far, and what they are most likely to do next, based on their documented pattern of behavior.
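A minimal sketch of the query side of such a database, assuming a simple in-memory store; the (stage, technique) keys and APT labels below are invented for illustration, just as the attributions in the figures are intentionally inaccurate:

```python
from collections import Counter

# Hypothetical knowledge base: (kill-chain stage, technique) -> set of APTs
# previously documented using that behavior. All entries are illustrative.
KNOWN_BEHAVIOR = {
    ("Delivery", "spear-phishing attachment"): {"APT1", "APT29"},
    ("Exploitation", "macro execution"): {"APT1"},
    ("Lateral Movement", "remote desktop protocol"): {"APT29", "APT33"},
    ("Exfiltration", "data compressed"): {"APT1", "APT29"},
}

def rank_actors(observed):
    """Score each APT by how many observed (stage, technique) events match
    its documented behavior; the highest score is the most likely actor."""
    scores = Counter()
    for event in observed:
        for apt in KNOWN_BEHAVIOR.get(event, ()):
            scores[apt] += 1
    return scores.most_common()
```

Queried this way, the database surfaces both what the intruder has most likely done so far and, via the top-ranked actor's other documented behaviors, what they are most likely to do next.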

    To maximize accessibility, the organization could build a simple interface to the database (e.g., a web page front-end) that allows defenders to quickly identify the most popular/common attack vectors, enabling them to focus their efforts where they will be most effective. The threat actor tags for each event allow simple correlation of queries with documented activities stored in the database. This enables quick identification of the APT most likely responsible for the observed activity based on the matching data points. This threat-hunting heat-map would enable intelligence analysts to provide actionable intelligence to defenders in cyberspace.

Figure 2

    Figure 2 (above) is an illustration of how activities during an observed campaign could be documented and tagged across the cyber kill-chain. Each row below the kill-chain stages indicates a separate (hypothetical) campaign. Each activity tagged for a specific APT indicates attribution of similar behavior based on analysis of past events. [Note: The activities and corresponding APTs are provided only to demonstrate how the interface might be used – the attribution is intentionally inaccurate, and the figure should not be used as a reference.]

    The benefit of using an interface like this should be clear – the more tags that appear across a row, the more likely it is that the corresponding APT is the culprit of the campaign. Depending on which stage of the kill-chain spun up the incident response team (IRT) into action, the analysts would be able to quickly identify what the intruder has already done, and they can advise the CSSP on where to implement the most effective countermeasures further down the kill-chain, both based on expected behavior supported by historical data in the database.

    Figure 3 below is a similar illustration using the MITRE ATT&CK matrix. Optimization of the interface becomes critical for this model because data can become confusing very quickly if not properly presented. This illustration presents another hypothetical example of a single campaign where each observed activity is documented, and the APT tag indicates which threat actor has demonstrated the behavior in past campaigns that have been analyzed. The dotted lines link activities observed by the same threat actor. [Again, attribution is intentionally wrong.]

Figure 3

    This example visually expresses which threat actor most likely conducted the campaign based on recorded behaviors. In this hypothetical example, the campaign is equally likely to have been prosecuted by APT 1 or APT 29, as three observed activities matched tagged entries for each actor in the database of recorded APT behaviors.

    The real value of following this methodology is the heat-map. Figure 4 below depicts how the heat map develops over time as more tagged data is recorded in the database. When an analyst displays ALL recorded threat activity, the darkest points indicate the most common tactics and methods used by APTs. 

    Once defensive countermeasures are identified for each tactic listed in the ATT&CK matrix, the ‘hot-spots’ in the heat map can quickly spotlight where a CSSP should prioritize its defense-in-depth efforts. In this example, the ‘hottest’ APT tactics that should be addressed are account enumeration, remote desktop protocol (RDP), and removable media. These observations might lead the CSSP to create fake accounts to detect account enumeration, implement multi-factor authentication for RDP access, and whitelist the removable media they use to prevent usage of unauthorized removable media.
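The heat-map itself reduces to frequency aggregation over every recorded event. A sketch of that aggregation, with the counts standing in for the color scale (the event tuples and technique names are illustrative only):

```python
from collections import Counter

def build_heatmap(events):
    """Tally how often each ATT&CK-style technique appears across all
    recorded (actor, technique) events; higher counts mean 'hotter' cells."""
    return Counter(technique for _actor, technique in events)

def hottest(events, n=3):
    """Return the n most frequent techniques -- the candidates for
    prioritized defense-in-depth countermeasures."""
    return [technique for technique, _count in build_heatmap(events).most_common(n)]
```

Each technique returned by `hottest` would then be mapped to its countermeasure, as in the account-enumeration, RDP, and removable-media examples above.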

Figure 4

    The classic, nation-centric development of threat intelligence may provide strategic context in support of DCO, but its usefulness is much more limited down at the tactical level. The use of a heat-map overlay on either the cyber kill-chain or ATT&CK matrix can enable responders to identify, contain, and recover from intruder activities (i.e., forensics). The cyber threat heat-map can also enable defenders to prioritize their efforts where they will be most effective (i.e., build defense-in-depth). Cyber threat heat-mapping provides actionable intelligence for tactical defensive cyberspace operators, and it helps CSSPs maximize their efficiency and effectiveness in defending their organization.

About the Author

Joe Marty leads a Cyber Protection Team (CPT) as a field-grade officer in the US Army Cyber Protection Brigade. He has experience conducting several incident response and proactive defensive cyberspace operations with his team in both Enterprise and Industrial Control Systems (ICS) environments. When he's not on the road leading his team, Joe enjoys writing, hacking, and traveling with his family.


What We Can Learn About Cyber Security from the Cold War and the Global War on Terrorism

posted Aug 18, 2017, 6:39 AM by James Caroland   [ updated Sep 11, 2017, 12:32 PM ]

By Dan Cahill, Commander, United States Navy

Cyber Security/Defense is often presented as a complex and expensive problem. However, if viewed through the proper prism, the fundamentals can be distilled down to a few lessons from history like the Cold War and the “Global War on Terrorism.” When considered in this context, the solutions become clearer and more cost effective.

If the Cold War taught the U.S. one thing, it should be that armies don’t win wars, economies do. A corollary is that solid business principles build economies and win wars. While the U.S. was building its overall economy, the Soviet Union was building up its military. Non-military Soviet-manufactured goods could not compete on the world stage and were limited to Warsaw Pact/Soviet Bloc nations. Throughout the Cold War, the U.S. had a manufacturing-based, export-oriented economy. The U.S. supplied the world with high-quality manufactured goods, and the U.S. economy grew by leaps and bounds.

During the Korean War, in the early 1950s, the U.S. spent 15% of its Gross Domestic Product (GDP) on the military; spending dropped precipitously to just over 10% by the end of the Vietnam War and stayed below 8% from 1972 onward [1]. In contrast, up until the early 1980s, the Soviet Union devoted 15-17% of its GDP to military expenditures, with increases of 4% to 7% per year since the end of World War II [2]. Considered in the context of the Cold War, these are highly disparate expenditures.

The Soviet Union attempted to keep up with the U.S. in military spending/power projection. The problem for the Soviet Union was that the U.S. economy, for much of the Cold War, was three times larger than the Soviet economy [3]. The U.S. beat the Soviet Union by drawing it into a fight the Soviet Union could not win and one that was fought by only two parties: the North Atlantic Treaty Organization and the Warsaw Pact.

Fast forward to September 11th, 2001: a terrorist operation that probably cost less than one million dollars prompted a multi-trillion dollar response; this is 1 x 10^6 versus 1 x 10^12 (a million to one). This demonstrates the effectiveness of asymmetrical warfare; the damage far exceeds the cost to produce it.

If we apply these principles to the cyber realm, we see that the U.S. Government, and more specifically the U.S. Department of Defense, is fighting a much larger economic war than what the U.S. fought during the Cold War. Unlike the Cold War, where the U.S. had an economy three times larger than its adversary and was pitted against the Soviet Union in a dollar-for-dollar war, the cyber-landscape is much different. Virtually every country in the world, and nearly every company that relies upon the Internet to conduct business, is in the market for Cyber Security solutions. In 2016, worldwide spending on Cyber Security was nearly 74 billion U.S. Dollars (USD) [4]. The entire U.S. Defense budget for 2016 was approximately 585 billion USD [5]. By 2020, worldwide Cyber Security spending is projected to exceed 100 billion USD, which would be roughly one-sixth of the 2016 U.S. Defense budget. [6]

The U.S. Department of Defense, or the U.S. government for that matter, cannot and should not attempt to compete simultaneously with the European Union, China, Russia, Microsoft, Apple, Google, Exxon, and virtually every other entity in the world that uses the Internet to conduct business. If it tried, with the U.S. economy representing only approximately 25% of the world economy, it would have to spend 4 to 1 against the rest of the world [7]. If the U.S. wants to compete in the 21st century, it needs to view Cyber Defense/Security in business terms rather than compete with what is already a functioning marketplace for cyber-related risk management. The better approach is threefold: develop effective offensive cyber weapons; decouple mission-critical national security information from the Internet by placing it on classified networks; and let the soon-to-be 100 billion USD Cyber Security market and the 2,500 billion (2.5 trillion) USD insurance industry develop solutions to protect non-mission-critical national security information and private industry networks and data [8].


[1] Council on Foreign Relations. “Trends in U.S. Military Spending”. Accessed Jul 15, 2017.
[2] Federation of American Scientists. “Russian Military Budget”. Sep 7, 2000.
[3] The Maddison-Project, 2013 version.
[4] Fortune Magazine. “Here’s How Much Businesses Worldwide Will Spend on Cybersecurity by 2020”. Accessed Jul 13, 2017.
[5] The U.S. Department of Defense. “The FY-2016 Budget Proposal”. Accessed Jul 13, 2017.
[6] Fortune Magazine. “Here’s How Much Businesses Worldwide Will Spend on Cybersecurity by 2020”. Accessed Jul 13, 2017.
[7] The World Bank. “Gross Domestic Product 2016”. Accessed Jul 31, 2017.
[8] Swiss Re. “Global insurance industry grows steadily in 2015 amidst moderate economic growth but outlook is mixed, Swiss Re sigma report says”. Accessed Jul 13, 2017.

Photo credits (in order of appearance):  Wikimedia Commons, Wikipedia

About the Author

Daniel Cahill holds a commission as a Commander in the United States Navy and serves in the U.S. Navy Reserve where he supports the Naval Inspector General, including oversight of the U.S. Navy’s Cyber Security program.  He holds a Bachelor’s Degree in Marine Engineering, with a concentration in Nuclear Engineering, from the United States Merchant Marine Academy.  He earned graduate certificates in both International Relations and Business from New York’s Columbia University, where he is currently a Masters candidate in their Enterprise Risk Management (ERM) program.  Commander Cahill's academic work has focused on applying business principles to government decision making and resource allocation.
