In the summer of 2015, the St. Louis chapter of the Military Cyber Professionals Association (MCPA) hosted its first annual cyber Capture The Flag (CTF) event to promote science, technology, engineering, and mathematics (STEM) education and cyber awareness in the St. Louis, MO area. As is typical with any first annual event, we identified items that, as coordinators, we could improve upon for future events. Specifically, the platform we selected to keep score of the competition looked good in theory but did not meet our practical needs and required significant modification before it was ready. Following the event, we decided to write our own. Despite all of the challenges, we believe the event was a tremendous success, and we now have our own scoring platform for future events. This will help our neighboring chapters of the MCPA (and other similar organizations) more easily hold CTF events.

CTFs are proving to be an excellent way to teach cybersecurity and raise awareness by “gamifying” cyber. These events typically have players compete by solving cybersecurity-related challenges in categories such as cryptography, reverse engineering, network exploitation, and forensics. Players are awarded points for solving these challenges, and the person or team with the most points at the end of the competition is typically declared the winner. Difficulty in these competitions can vary widely; the scoring platforms, however, typically do not support that same range of difficulty.

While developing the competition in St. Louis, our goal was to develop a CTF focused on teaching. After conducting some research, we decided to use the open source PicoCTF Platform 2 built by Carnegie Mellon University. Their platform was designed for the annual PicoCTF, a national high-school-level competition. Initially, it seemed like a great choice for our event, as it was designed primarily to support a teaching CTF. Unfortunately, we ran into issues and missing functionality that none of the scoring platforms we evaluated could resolve or provide. In general, the platform was difficult to work with and required that all of the challenges be built into the system via an undocumented programming interface. Further, once the challenges had been added, they were nearly impossible to modify or remove.

While developing the competition, we distributed challenge development among several individuals. Because of the issues we had with the programmatic interface for challenge deployment, incorrect answers ended up in the database, so competitors who entered the correct solution were not awarded the points they had earned. Worse, once we manually corrected a solution in the database, the system still would not allow competitors to enter the correct answer. We were able to overcome these obstacles during the competition, but they put us in an embarrassing position. Unfortunately, this was not our only issue with the PicoCTF Platform.

One of the features we implemented involved having hints available during the competition. As competition designers, we had participated in a few CTFs ourselves and observed that having hints available greatly improved our ability to learn what was being tested. For example, the SANS platform offers students hints for a cost: they can choose to give up some of the points they would earn in order to gain insight into how a challenge might be solved. To our surprise, none of the open source scoring platforms offered a way for us to implement this feature, so we decided to implement it ourselves.

For our modified version of PicoCTF, we built a dynamic hint system. Initially, we designed the system to simply provide hints that cost competitors points. Our idea was to provide increasingly informative hints, making a correct solve worth fewer points based on how many hints were used. To standardize, we wrote four hints for each challenge, with each hint subtracting 25 percent of the challenge's original point value. For example, if a challenge was worth 100 points, the first hint requested would subtract 25. By the fourth hint, the player or team would not receive any points, but they would know how to solve the challenge. We found this approach to be tremendously successful. Several competitors admitted that, given their existing skillsets, there would have been no way to solve many of the challenges without the hints. Even more exciting, many of the competitors who used hints stated that they had learned more at our CTF than at previous events.
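
To make the scoring rule concrete, the short Ruby sketch below implements the deduction scheme described above: four hints per challenge, each costing 25 percent of the challenge's original value. The method and constant names are illustrative only and are not drawn from HackTheArch's actual code.

    # Illustrative sketch of the hint-cost rule: four hints per challenge,
    # each deducting 25 percent of the challenge's original point value.
    # Names (HINTS_PER_CHALLENGE, award_for) are hypothetical, not HackTheArch's API.
    HINTS_PER_CHALLENGE = 4
    DEDUCTION_PER_HINT  = 0.25

    # Points awarded for a correct solve after `hints_used` hints were requested.
    def award_for(challenge_points, hints_used)
      hints_used = [[hints_used, 0].max, HINTS_PER_CHALLENGE].min  # clamp to 0..4
      (challenge_points * (1.0 - hints_used * DEDUCTION_PER_HINT)).round
    end

    # A 100-point challenge:
    #   award_for(100, 0)  #=> 100
    #   award_for(100, 1)  #=> 75
    #   award_for(100, 4)  #=> 0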
After deciding to host this event annually, we knew we would have to write our own platform, since the platform we wanted did not exist. The result of our effort is called HackTheArch, a Ruby on Rails web application that implements all of the features we found other platforms to be missing. It includes a web interface for adding and modifying challenges, the dynamic hint system we had built into the PicoCTF Platform, and, finally, a tiered bracket system allowing competition designers to offer a variable number of hints to each bracket. We have not seen such a handicap system employed in a CTF and are excited to see how it gets used.
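
As a rough illustration of how such a bracket handicap could work, the sketch below caps the number of hints a team may request based on its bracket. The bracket names and limits here are hypothetical and are not taken from the platform itself.

    # Illustrative tiered-bracket handicap: each bracket may request a different
    # maximum number of hints per challenge. Bracket names and limits are hypothetical.
    BRACKET_HINT_LIMITS = {
      "beginner"     => 4,  # may reveal every hint, down to zero points
      "intermediate" => 2,
      "advanced"     => 0   # no hints available
    }.freeze

    def hint_available?(bracket, hints_already_used)
      hints_already_used < BRACKET_HINT_LIMITS.fetch(bracket, 0)
    end

    # hint_available?("beginner", 3)  #=> true
    # hint_available?("advanced", 0)  #=> false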

We think this platform can be used in all types of CTFs, so we released the source code on GitHub under the MIT license and look forward to continued improvement and development. To help encourage crowd-sourced development, we built the platform with a comprehensive test harness to catch regression errors. Moreover, the source code follows standard Ruby on Rails conventions, making it easier for developers to get started. We even spent several hours attempting to break the system to prove to ourselves it was safe to use in a hacking competition, and we plan to do the same after each major release. To date, we have not found any serious vulnerabilities.

Beyond allowing the community to help develop the platform, we believe this package removes much of the complication of other scoring platforms by giving flexibility back to competition designers. The platform is extremely easy to get up and running with very little web development experience, and, once deployed, it requires no development knowledge at all because challenges and hints are added and administered through the web interface.

As validation of our effort, the platform has received a great deal of attention from the new US Cyber Command Cyber Mission Teams (CMTs) and, specifically, the Cyber Protection Teams (CPTs) at Scott Air Force Base, IL. The platform is being recommended to the greater CMT community as a standard approach to training and evaluation and has support from within the 24th Air Force (AFCYBER). It is being considered as a replacement for paper-based evaluations and as a way to score competitive events between teams. Furthermore, if employed in this way, evaluation questions could easily be replicated and standardized between teams via existing SQL import/export tools. Additionally, the civilian cybersecurity training website Cybrary.it is evaluating the platform as a way of administering tests for planned certifications, and a non-profit in the Baltimore area is using it to score a CTF for under-served youth.

Our goal was to enable more CTF competitions with less overhead, because CTF has already proven to be an effective teaching tool. We wanted competition developers and competitors to be able to focus on the challenges instead of the scoring platform. We have presented only a minor tweak to that successful model and cannot wait to see how it gets used. We believe the development of this platform helps achieve our goals and further invests in our Nation's future by helping educators more effectively teach crucial cybersecurity fundamentals.

About the Author: Paul Jordan

Paul Jordan is an Air Force Cyberspace Operations Officer, former President of the MCPA St. Louis Chapter, and graduate student at the Air Force Institute of Technology.