Is unexpected behavior in a computer program necessarily a vulnerability? Why or why not?


According to Pfleeger, Pfleeger, and Margulies (2015), programming flaws can cause integrity problems that lead to harmful output or action, and these flaws offer an opportunity for exploitation by a malicious actor (p. 162). Agreed, but I believe the real question is whether unexpected behavior is always the product of a programming flaw, and, where a flaw exists, whether it necessarily creates a vulnerability that can be exploited. That is a hard question to answer without a deeper, more refined definition of "unexpected behavior." I am sure many remember the first BASIC program they ever wrote, something like:

10 PRINT "Name"
20 GOTO 10

Adding a trailing semicolon and some spaces between Name and the closing quote on line ten (10 PRINT "Name     ";) alters the output: the semicolon suppresses the newline, so the text repeats across the line, and ten trailing spaces produce output that differs from twenty. The behavior may be unexpected, but it does not indicate a vulnerability.
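The same class of surprise is easy to reproduce in a modern language. As a minimal sketch (the `run` helper is my own illustration, not from any text), this Python snippet mimics the BASIC semicolon by printing without a newline:

```python
import io

def run(trailing_spaces, repeats=3):
    # Mimic the BASIC trailing semicolon: print with end="" so that
    # repeated output runs together on one line instead of stacking.
    buf = io.StringIO()
    for _ in range(repeats):
        print("Name" + " " * trailing_spaces, end="", file=buf)
    return buf.getvalue()

# Ten versus twenty trailing spaces yield visibly different output,
# yet both programs are behaving deterministically; the surprise is
# cosmetic, not a security flaw.
print(repr(run(10)))
print(repr(run(20)))
```

The point of the sketch is that "unexpected" output here is fully explained by the program's (perhaps misunderstood) semantics; nothing about it is exploitable.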

Most modern programming languages have constructs to trap exceptions. Constructs like try/catch/finally attempt to trap exceptions and, ideally, exit the error condition gracefully while logging it:

try {
    // execute code that may throw
}
catch (error) {
    // log the error if the try block throws an exception
}
finally {
    // cleanup that always runs, whether or not an exception was thrown
}

Many modern applications leverage these constructs, but it is certainly possible to deliver working code that contains no exception handling at all. There is an abundance of code in the wild that is highly vulnerable for a myriad of reasons, ranging from bad programming to situations that were never considered and thus never addressed. Legacy systems such as programmable logic controllers (PLCs), still running code written when the world was not connected and security was not a concern, contain some serious vulnerabilities.

The Agile and DevOps movements have dramatically accelerated the frequency of software releases. It is common practice to ship software containing known, documented defects: issues identified during testing cycles but not flagged as show-stoppers, so the release cycle continues. These defects are not vulnerabilities but known bugs, typically with documented workarounds; they are undesirable expected behavior rather than unexpected behavior. Shorter release cycles do bring more unexpected behavior, but this is offset by rigorous version control, A/B testing, and automated rollback to a known-good version. Systems fail faster today, and rollbacks happen even more quickly. There is irony here: systems with life-or-death implications have very slow release cycles (it is hard to release frequently and tolerate known defects when talking about a heart-lung machine). These systems tend to be arcane and often vulnerable because they were never architected to live in a connected world; they value predictability and stability over functionality.

Exception handling, along with verbose logging and the creation of audit trails, has become standard practice. In the days of top-down systems it was easy for the developer to own the user experience, but the dawn of event-driven architectures made this much harder, and logging is now a critical aspect of every system. The focus of many security firms is no longer simply to keep those exploiting vulnerabilities out, but rather to contain them once inside, find them, and determine what they are trying to do (Press, 2016).
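To tie the two practices together, here is a hedged Python sketch (the `divide` function and logger name are my own inventions, not from any cited source) showing exception handling and logging working as a simple audit trail:

```python
import logging

# Configure a logger that doubles as a rudimentary audit trail.
logging.basicConfig(format="%(asctime)s %(levelname)s %(name)s: %(message)s",
                    level=logging.INFO)
log = logging.getLogger("audit")

def divide(numerator, denominator):
    """Return numerator / denominator, trapping and logging failures."""
    try:
        result = numerator / denominator
    except ZeroDivisionError:
        # Trap the exception and exit gracefully, logging the error.
        log.error("division by zero; numerator=%s", numerator)
        return None
    else:
        log.info("division ok: %s / %s = %s", numerator, denominator, result)
        return result
    finally:
        # Runs on every path, success or failure -- the audit entry.
        log.info("divide() call completed")

divide(10, 2)   # logged as a successful operation
divide(10, 0)   # trapped, logged, and handled gracefully
```

Every call leaves a record, so when behavior is unexpected there is at least a trail to reconstruct what happened, which is precisely what the find-and-observe security posture depends on.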


A New Avenue of Attack: Event-driven system vulnerabilities. (n.d.). Retrieved March 15, 2017, from

Error Handling. (n.d.). Retrieved March 15, 2017, from

Manifesto for Agile Software Development. (n.d.). Retrieved March 15, 2017, from

Pfleeger, C. P., Pfleeger, S. L., & Margulies, J. (2015). Security in computing (5th ed.). Upper Saddle River: Prentice Hall.

Press, A. (2016, August 12). Keeping hackers out no longer the best security strategy, FireEye says. Retrieved March 15, 2017, from