Richard J. Bocchinfuso

"Be yourself; everyone else is already taken." – Oscar Wilde

FIT MGT5114 – Wk3 Discussion 1 Peer Response

I enjoyed reading your post, and I appreciate your comments on my post. Sometimes it's easy to forget the tools I (we) use every day to protect information because we don't trust broader access controls. I have been using tools like AxCrypt and VeraCrypt (previously TrueCrypt) for years to protect personal data, similar to Microsoft BitLocker. My company used full-disk encryption for a while, which required you to enter a password before booting your laptop; the idea was that all data on the hard drive was encrypted, so if the laptop was lost or stolen someone could not pull the drive, connect it to another machine, and start perusing data. I hated the laptop encryption; the concept was good, but software-based full-volume encryption crushed I/O performance, slowing the machine to the point of being impractical.

I think you bring up an excellent point regarding public drives, network shares, and other network-based technologies where we assume our data is secure, confidential, and guaranteed authentic; in practice this is a bigger challenge than many realize. I work with organizations of varied sizes, from the Fortune Ten to SMBs, and I have always been amazed by the power of the IT guy/gal and how the desire for simplicity often gives way to massive security issues. Group shares for HR, legal, etc., and user shares in departments that should be highly confidential, with root or administrative privileges removed, are so often fully accessible by IT administrative users. It's understandable why, but no less concerning. Removing root or administrative privileges greatly complicates tasks like backups and migrations, tasks IT organizations (the IT guys/gals) perform all the time, and this often leads to practices that create security holes. Granular, user-controllable permissions orchestrated from an API, along with a move toward guaranteed authenticity, became popular with content-addressable storage (CAS), and today the properties of CAS are part of object-based storage systems like Amazon (AWS) S3.
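As a side note, the CAS idea is easy to sketch: the address of an object is the hash of its contents, so any change to the content changes the address, making tampering self-evident. A minimal, hypothetical illustration in PowerShell (the .\objects store and file name are made up for the sketch):

PS> $hash = (Get-FileHash .\somefile.txt -Algorithm SHA256).Hash
PS> New-Item -ItemType Directory -Force .\objects | Out-Null
PS> Copy-Item .\somefile.txt ".\objects\$hash"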

Let’s look at the following example:

The original file, iou.txt, says the following: "John Doe owes Jane Smith $1,000.00"
Below you can see I create the file (Set-Content), output the contents of the file (Get-Content), display the file attributes (Get-ItemProperty), and then hash the file (Get-FileHash). The file hash is very important.

PS D:\Downloads\week3> Set-Content .\iou.txt 'John Doe owes Jane Smith $1,000.00'
PS D:\Downloads\week3> Get-Content .\iou.txt
John Doe owes Jane Smith $1,000.00
PS D:\Downloads\week3> Get-ItemProperty .\iou.txt | Format-List

Directory: D:\Downloads\week3

Name : iou.txt
Length : 36
CreationTime : 3/26/2017 5:55:46 PM
LastWriteTime : 3/26/2017 5:55:46 PM
LastAccessTime : 3/26/2017 5:55:46 PM
Mode : -a----

PS D:\Downloads\week3> Get-FileHash .\iou.txt -Algorithm MD5 | Format-List

Algorithm : MD5
Hash : 17F6B6FB31AAEB1F37864667D87E527B
Path : D:\Downloads\week3\iou.txt

Now let's compromise the file. Let's assume I am John Doe, the IT guy with global administrative privileges. Let's also consider that most people don't take a hash of their files when they save them, so there is no baseline to verify authenticity against.

Below I overwrite the contents of iou.txt (Set-Content) to state that Jane now owes John $100,000, a pretty significant change.
I display the contents of iou.txt (Get-Content) to validate that the modification was made. I then display the file attributes (Get-ItemProperty); here you can see that the file size is the same and the only attribute that changed is the LastWriteTime, a significant attribute, but we will set it back to match its value from before the contents were tampered with.
Next is the hash of the file contents (Get-FileHash), which shows a different hash. Remember, though, that most people don't hash their files and store the hash to guarantee authenticity. The hash is a powerful tool in determining authenticity.
Next, I set the CreationTime, LastWriteTime, and LastAccessTime to match the original file.
Listing the file attributes again, you can see everything now matches the original file: same name, file size, timestamps, etc.
The only evidence we have that the file was changed is the differing hash.

PS D:\Downloads\week3> Set-Content .\iou.txt 'Jane Smith owes John Doe $100,000.'
PS D:\Downloads\week3> Get-Content .\iou.txt
Jane Smith owes John Doe $100,000.
PS D:\Downloads\week3> Get-ItemProperty .\iou.txt | Format-List

Directory: D:\Downloads\week3

Name : iou.txt
Length : 36
CreationTime : 3/26/2017 5:55:46 PM
LastWriteTime : 3/26/2017 6:08:28 PM
LastAccessTime : 3/26/2017 5:55:46 PM
Mode : -a----

PS D:\Downloads\week3> Get-FileHash .\iou.txt -Algorithm MD5 | Format-List

Algorithm : MD5
Hash : FB86680C6A90402598A2A1E4A27AA278
Path : D:\Downloads\week3\iou.txt

PS D:\Downloads\week3> $(Get-Item iou.txt).CreationTime=$(Get-Date '3/26/2017 5:55:46 PM')
PS D:\Downloads\week3> $(Get-Item iou.txt).LastAccessTime=$(Get-Date '3/26/2017 5:55:46 PM')
PS D:\Downloads\week3> $(Get-Item iou.txt).LastWriteTime=$(Get-Date '3/26/2017 5:55:46 PM')
PS D:\Downloads\week3> Get-Content .\iou.txt
Jane Smith owes John Doe $100,000.
PS D:\Downloads\week3> Get-ItemProperty .\iou.txt | Format-List

Directory: D:\Downloads\week3

Name : iou.txt
Length : 36
CreationTime : 3/26/2017 5:55:46 PM
LastWriteTime : 3/26/2017 5:55:46 PM
LastAccessTime : 3/26/2017 5:55:46 PM
Mode : -a----

PS D:\Downloads\week3> Get-FileHash .\iou.txt -Algorithm MD5 | Format-List

Algorithm : MD5
Hash : FB86680C6A90402598A2A1E4A27AA278
Path : D:\Downloads\week3\iou.txt

Note: All of the above commands were executed on a Windows host using PowerShell.
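Had a baseline hash been recorded when the file was created, detecting the tampering would be trivial. A minimal sketch, where the stored baseline is simply the original hash from above:

PS D:\Downloads\week3> $baseline = '17F6B6FB31AAEB1F37864667D87E527B'
PS D:\Downloads\week3> if ((Get-FileHash .\iou.txt -Algorithm MD5).Hash -ne $baseline) { Write-Warning 'iou.txt has been modified' }
WARNING: iou.txt has been modified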

References:

Compliance: Governance, Authenticity and Availability. (n.d.). Retrieved March 26, 2017, from http://object-matrix.com/solutions/corporate/finance/compliance/

Pfleeger, C. P., Pfleeger, S. L., & Margulies, J. (2015). Security in computing (5th ed.). Upper Saddle River: Prentice Hall.

FIT MGT5114 – Wk3 Discussion 1 Post

Question:

The traditional protection levels used by operating systems to protect files are read, write, and execute. What are some other possible levels that a user may wish to apply to files, folders, code, etc.? Justify your answers with examples.

Response:

File and folder permissions are governed slightly differently based on operating system type, but the constructs are the same. On Unix and other POSIX-compliant systems (Linux, Android, macOS, etc.), file and folder permissions are managed using a user, group, others (or world) model.

For example:
foo.bar: file type | owner | group | world
foo.bar:     -     |  rwx  |  r-x  |  r-x  (-rwxr-xr-x)

The leading "-" denotes the file type (a regular file); special bits like setuid, setgid, and the sticky bit show up within the permission triplets (e.g., an "s" or "t" in place of "x").

Files and folders can have permissions quickly set for Owner, Group and World by using the numeric value for the permission mask.
r (read) = 4
w (write) = 2
x (execute) = 1

To assign the file "foo.bar" the permission mask of:
owner = rwx (4+2+1 = 7)
group = r-x (4+0+1 = 5)
others = r-x (4+0+1 = 5)
the command would be "chmod 755 foo.bar".

Unix-based systems leverage three additional permission bits: setuid, setgid, and the sticky bit.
When the setuid permission is set, the user executing the file assumes the permissions of the file owner.
When the setgid permission is set, the user executing the file is granted permissions based on the group associated with the file.
When the sticky bit is set on a directory, files within it can be deleted only by the file owner, the directory owner, or root.

These special permissions are set in the following fashion:
sticky bit = 1000
setgid = 2000
setuid = 4000

The idea is the same as setting regular file permissions. To set the sticky bit on foo.bar with full permissions, the command would be "chmod 1777 foo.bar". To set setgid and setuid with rwx permissions for the owner and read-only permissions for the group and others, the command would be "chmod 6744 foo.bar".

Windows-based systems follow a similar file and folder permission construct, at least on systems using the NTFS file system (most modern Windows OSes). Older Microsoft operating systems like MS-DOS (FAT16 file system) and Windows 95 (FAT32 file system) use file attributes (Read-Only or Read-Write) rather than a full permission system.
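For reference, the full NTFS ACL on a file can be inspected (and scripted against) with PowerShell, for example:

PS D:\Downloads\week3> Get-Acl .\iou.txt | Format-List Owner, AccessToString

This lists the file's owner and its access control entries, which go well beyond the simple read/write attribute model of FAT.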

Permission inheritance is an important concept; setgid and setuid are used to facilitate inheritance on Unix systems. The application is slightly different on Windows operating systems, but the premise is the same.

Source code can be protected in various ways outside of just file permissions. One option is to compile the code, making it executable but not readable. Compiled languages like C++ compile into machine code; these compiled binaries are not easily decompiled. Another option is to use a bytecode compiler, often used with interpreted languages like Python, Perl, and Ruby. Machine code must be compiled for a specific architecture; for example, x86, x64, and ARM would require three separate binaries, while bytecode-compiled binaries work across architectures. The downside with bytecode-compiled binaries is that most of the source structure is retained in the compiled binary, making it far easier to decompile.
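As a quick, hypothetical illustration of the bytecode option using Python (the file name is made up):

PS> python -m py_compile secret_logic.py

The compiled bytecode lands under __pycache__ and runs without the .py source, but tools like uncompyle6 can largely reconstruct the original code, which is the decompilation downside noted above.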

Daemons like auditd provide the ability to maintain detailed audit trails of file access. Systems like Varonis provide the ability to audit and verify permissions to ensure that the proper permissions are assigned to files and folders.

Outside of file and folder permissions, there are application-level permissions, such as RDBMS permissions, which determine how a user can interact with the RDBMS and the data it houses. Object stores like AWS S3 offer an authorization model similar to filesystem permissions, and these permissions are typically managed via API using standard authentication methods like OAuth2 and SAML token-based authentication. NAC, or Network Access Control, is a system which controls network access and manages security posture. Revision control systems like Git use access controls to protect source code; in the case of Git these ACLs are very similar to Unix-based ACLs. Many systems today which leverage REST and SOAP APIs to access data use tokens and keys to authenticate users and grant rights. I just finished working on some code today (https://gist.github.com/rbocchinfuso/36f8c58eb93c4932ec4d31b6818b82e8) for a project which uses the Smartsheet API and token-based authentication so that cells can be updated using a command from Slack. This code authenticates using a token contained in an unpublished config.inc.php file and allows fields in a Smartsheet to be toggled using a command similar to "ssUpdate rowID,columnID,state". Token-based authentication, in this case, can provide VIEWER, EDITOR, EDITOR_SHARE, ADMIN, and OWNER (https://smartsheet-platform.github.io/api-docs/#authentication) privileges while being stateless and without requiring username and password authentication.
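To make the token-based authentication pattern concrete, here is a minimal, hypothetical sketch in PowerShell; the token value is a placeholder and {sheetId} stands in for a real sheet ID, but the Bearer-token header is the general shape the Smartsheet API documentation describes:

PS> $headers = @{ Authorization = 'Bearer YOUR_API_TOKEN' }
PS> Invoke-RestMethod -Uri 'https://api.smartsheet.com/2.0/sheets/{sheetId}' -Headers $headers -Method Get

No username or password is exchanged; the token alone establishes identity and the privilege level (VIEWER, EDITOR, etc.) granted to it.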

References

Pfleeger, C. P., Pfleeger, S. L., & Margulies, J. (2015). Security in computing (5th ed.). Upper Saddle River: Prentice Hall.

FIT MGT5114 – Wk2 Discussion 1 Peer Response

Response 1:

Good points. To play devil's advocate here: do you think the scenario you put forward, regarding OS or firmware upgrades on older or unsupported devices, is likely to increase the probability of introducing unintentional vulnerabilities?

Response 2:

I agree that it is a "yes and no sort of question". I like your example about clicking on "You will never believe what So-and-So Famous Person is doing now" because it highlights the idea that the user is experiencing unexpected behavior, and thus the probability of malicious activity is likely greater. IMO the complexity here lies in determining whether the unexpected behavior indicates a vulnerability or a threat.

FIT MGT5114 – Wk2 Discussion 1 Post

Question:

Is unexpected behavior in a computer program necessarily a vulnerability? Why or why not?

Response:

According to Pfleeger, Pfleeger & Margulies (2015), programming flaws can cause integrity problems which lead to harmful output or action, and these programming flaws offer an opportunity for exploitation by a malicious actor (p. 162). Agreed, but I believe the question is: does this imply that unexpected behavior is always a function of a programming flaw, and if there is a programming flaw, has it created a vulnerability which can be exploited? I think this is a hard question to answer without a deeper, more refined definition of "unexpected behavior". I am sure many remember the first BASIC program they ever wrote, something like:

10 print “Name”
20 goto 10
run

The addition of a trailing semicolon and spaces between Name and the trailing quote on line ten (10 print "Name     ";) will alter the output; ten trailing spaces produce output that is different from twenty trailing spaces, and while the behavior may be unexpected, it does not indicate a vulnerability.

Most modern programming languages have constructs to trap exceptions. Constructs like try/catch/finally attempt to trap exceptions and hopefully exit the condition gracefully, logging the error. In PowerShell, for example:

try {
    # code that may throw an exception
    Get-Content .\missing.txt -ErrorAction Stop
}
catch {
    # log the error if the try block throws an exception
    Write-Error $_
}
finally {
    # cleanup always runs, whether or not an exception was thrown
    Write-Verbose 'cleanup'
}

Many modern applications leverage these constructs, but it's certainly possible to deliver working code which contains no exception handling at all. There is an abundance of code in the wild that is highly vulnerable for a myriad of reasons, ranging from bad programming to situations that were never considered and thus never addressed. Legacy systems like programmable logic controllers (PLCs), running code from a time when the world was not connected and security was not a concern, contain some serious vulnerabilities.

Agile and DevOps movements have dramatically accelerated the frequency of software releases. It's common practice to release software containing known and/or documented defects which are identified during testing cycles but not flagged as showstoppers, meaning the release cycle continues. These defects are not vulnerabilities but rather known bugs, typically with documented workarounds; essentially undesirable expected behavior rather than unexpected behavior. Shorter release cycles are accompanied by an increase in unexpected behavior, offset by rigorous version control, A/B testing, and automation which rolls back to a known good state. Systems fail faster today, and rollbacks happen even more quickly. There is irony here: systems which have life-or-death implications have slow (very slow) release cycles (e.g., it's hard to do frequent software releases and tolerate known defects when talking about a heart-lung machine). These systems tend to be arcane and often vulnerable because they were never architected to live in the connected world; they value predictability and stability over functionality.

Exception handling, along with verbose logging and the creation of audit trails, has become standard practice. In the days of top-down systems it was easy for the developer to own the user experience, but the dawn of event-driven systems made this much harder, and logging is now a critical aspect of every system. The focus of many security firms is no longer to keep those exploiting vulnerabilities out but rather to keep them in, find them, and determine what they are trying to do (http://www.mercurynews.com/2015/02/10/keeping-hackers-out-no-longer-the-best-security-strategy-fireeye-says/).

References

A New Avenue of Attack: Event-driven system vulnerabilities. (n.d.). Retrieved March 15, 2017, from http://www.isg.rhul.ac.uk/~simos/event_demo/

Error Handling. (n.d.). Retrieved March 15, 2017, from https://www.owasp.org/index.php/Error_Handling

Manifesto for Agile Software Development. (n.d.). Retrieved March 15, 2017, from http://agilemanifesto.org/

Pfleeger, C. P., Pfleeger, S. L., & Margulies, J. (2015). Security in computing (5th ed.). Upper Saddle River: Prentice Hall.

Associated Press. (2016, August 12). Keeping hackers out no longer the best security strategy, FireEye says. Retrieved March 15, 2017, from http://www.mercurynews.com/2015/02/10/keeping-hackers-out-no-longer-the-best-security-strategy-fireeye-says/

FIT MGT5114 – Wk1 Discussion 2 Peer Response

I enjoyed reading your post. Many hobbyist black hat hackers and script kiddies are pretty lazy. While I understand the concept of never connecting to a Starbucks WiFi to avoid a man-in-the-middle (MitM) attack, it's probably a bit unreasonable. Security is often like locking the front door of our home or our car door when we leave it unattended; these are merely deterrents, because most thieves are pretty lazy. They will walk around the mall pulling door handles until one opens, so the deterrent is quite effective. The same idea often applies to network security: if we use something like a VPN to encrypt communication, while far from unhackable, it's likely enough to have the individual perpetrating the man-in-the-middle attack pass over us and look for easier prey. In a world where our lives are conducted online, my personal philosophy is to lock the door of the house and the car when I leave them unattended; never leaving the house or the car unattended at all is probably unreasonable.

If you've never seen how easy it is to conduct a MitM attack, here is a good instructional video (https://www.youtube.com/watch?v=IdhuX4BEK6s) that shows how to use the WiFi Pineapple to carry one out. Scary simple. 🙂 It's much harder to crack encrypted captured data, so if someone is sitting at a Starbucks with a WiFi Pineapple conducting a MitM attack, with either a rogue access point or by spoofing an access point (evil twin AP), the probability of the perpetrator spending the time to decrypt the data is low. A MitM attacker will likely move on to someone else who is passing unencrypted traffic.

References

Pfleeger, C. P., Pfleeger, S. L., & Margulies, J. (2015). Security in computing (5th ed.). Upper Saddle River: Prentice Hall.

FIT MGT5114 – Wk1 Discussion 1 Peer Response

I enjoyed reading your post. Long, complex passwords have become an essential security measure. I am an aspiring ethical hacker, and one of my hobbies is cracking hashed passwords. Ten years ago, cracking a nine-character upper- and lowercase alphanumeric password would have been highly improbable. Today you can grab an AWS p2.16xlarge instance for about fourteen dollars an hour on demand, and if you're frugal and looking to crack passwords at scale, you can use spot instances and lower the cost of a p2.16xlarge to less than seven dollars an hour. The use of GPUs has lowered the time to crack passwords from years to days and from days to minutes and seconds. Most people know that using a long password which contains upper- and lowercase letters, numbers, and special characters is a good idea. It's also a good idea to avoid simple leet passwords like "H0use" because these sorts of passwords provide little in the way of extra security. A little-known fact is that using a ":" in your password makes it significantly harder to crack; the reason is that password-cracking tools like hashcat use the colon as a delimiter (linked to the Unix /etc/passwd file's use of the colon to delimit fields) for the split function, so a colon confuses the password cracker. Unfortunately, the colon is a common delimiter, and not all systems will allow its use.
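To make the GPU point concrete, a hypothetical hashcat mask attack against nine-character passwords looks like this (-m 0 selects MD5, -a 3 selects mask/brute-force mode, and each ?a position covers upper, lower, digits, and symbols):

hashcat -m 0 -a 3 hashes.txt ?a?a?a?a?a?a?a?a?a

On GPU-heavy hardware like a p2.16xlarge, a keyspace like this becomes tractable in a way it simply wasn't a decade ago.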

References

Amazon EC2 – P2 Instances. (n.d.). Retrieved March 12, 2017, from https://aws.amazon.com/ec2/instance-types/p2/

Goodin, D. (2013, May 28). Anatomy of a hack: How crackers ransack passwords like "qeadzcwrsfxv1331". Retrieved March 12, 2017, from https://arstechnica.com/security/2013/05/how-crackers-make-minced-meat-out-of-your-passwords/2/

Gite, V. (2015, August 03). Understanding /etc/passwd File Format. Retrieved March 12, 2017, from https://www.cyberciti.biz/faq/understanding-etcpasswd-file-format/

GPU Password Cracking – Bruteforceing a Windows Password Using a Graphic Card. (2011, July 12). Retrieved March 12, 2017, from https://mytechencounters.wordpress.com/2011/04/03/gpu-password-cracking-crack-a-windows-password-using-a-graphic-card/

Hashcat advanced password recovery. (n.d.). Retrieved March 12, 2017, from https://hashcat.net/hashcat/

Mathiopoulos, I. (2016, October 05). Running hashcat in Amazon’s AWS new 16 GPU p2.16xlarge instance. Retrieved March 12, 2017, from https://medium.com/@iraklis/running-hashcat-in-amazons-aws-new-16-gpu-p2-16xlarge-instance-9963f607164c#.kcszxs1s5

Pfleeger, C. P., Pfleeger, S. L., & Margulies, J. (2015). Security in computing (5th ed.). Upper Saddle River: Prentice Hall.

Project 12: Cracking Linux Password Hashes with Hashcat (15 pts.). (n.d.). Retrieved March 12, 2017, from https://samsclass.info/123/proj10/p12-hashcat.htm

Spot Bid Advisor. (n.d.). Retrieved March 12, 2017, from https://aws.amazon.com/ec2/spot/bid-advisor/

FIT MGT5114 – Wk1 Discussion 2 Post

Question:

Why should you periodically change the key used to encrypt messages? What attack is more likely to succeed if a key has been used frequently? How frequently should the key be changed?

Response:

While breaking modern-day encryption keys (e.g., AES-256, RSA-1024, RSA-2048, RSA-4096) is improbable, it is not impossible.  Many enterprise-class encryption systems leverage key management systems so that encryption key rotation can be accomplished without the massive burden of manually maintaining and tracking key pairs.  One such solution is keyAuthority from Thales.  Key management systems are often used for encrypting data-at-rest on disk and tape.  As we learned in chapter two of the text, the initial exchange of keys is subject to a man-in-the-middle attack, but more importantly, if a single key pair is used, the lack of a key rotation policy can create, and has created, serious exposure.  In a world where developers are moving at an unprecedented pace, and cloud computing is providing easy access to infrastructure for developers, we are seeing all sorts of human error which is creating severe pain for many organizations.  Most notably, developers are publishing keys to GitHub, and hackers are now crawling GitHub looking for AWS keys (the code to perform the crawling has even been published to GitHub).  AWS is a giant honeypot sitting on the internet, and human error like publishing AWS keys to GitHub is a huge risk; a key management strategy is really important to ensuring that if a key gets into the wild you can minimize the potential impact.
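For the AWS key-leak scenario specifically, rotation is scriptable; a hypothetical rotation flow using the AWS CLI (the user name and key ID are placeholders):

aws iam create-access-key --user-name deploy-bot
aws iam update-access-key --user-name deploy-bot --access-key-id AKIAOLDKEYID --status Inactive
aws iam delete-access-key --user-name deploy-bot --access-key-id AKIAOLDKEYID

Create the new key, update the application's credential store, deactivate the old key (so a rollback is still possible), and only then delete it; a leaked key's useful life shrinks to the rotation interval.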

The governance of how often an encryption key should be changed really depends on what the encryption key is used for.  IMO the complexity of key management and the value of the assets being protected need to be taken into consideration before deciding on a key management strategy.  Additionally, compliance with regulatory agencies needs to be considered when developing a key management strategy; regulations like SEC 17a-4 and HIPAA are likely to seriously influence key management policies.

References

Pauli, D. (2015, January 6). Dev put AWS keys on Github. Then BAD THINGS happened. Retrieved March 08, 2017, from https://www.theregister.co.uk/2015/01/06/dev_blunder_shows_github_crawling_with_keyslurping_bots/

Burton, H. (2017, January 10). TruffleHog: Hacker publishes secret key spotter to Github. Retrieved March 8, 2017, from http://www.theinquirer.net/inquirer/news/3002198/trufflehog-hacker-publishes-secret-key-spotted-to-github

Mimoso, M. (2014, June 19). Hacker Puts Hosting Service Code Spaces Out of Business. Retrieved March 08, 2017, from https://threatpost.com/hacker-puts-hosting-service-code-spaces-out-of-business/106761/

Pal, K. (2015, July 15). 10 Best Practices for Encryption Key Management and Data Security. Retrieved March 08, 2017, from https://www.techopedia.com/2/30767/security/10-best-practices-for-encryption-key-management-and-data-security

Pfleeger, C. P., Pfleeger, S. L., & Margulies, J. (2015). Security in computing (5th ed.). Upper Saddle River: Prentice Hall.

Townsend, P. (n.d.). HIPAA/HITECH Act – Encryption and Key Management Requirements. Retrieved March 08, 2017, from https://info.townsendsecurity.com/bid/38671/HIPAA-HITECH-Act-Encryption-and-Key-Management-Requirements

FIT MGT5114 – Wk1 Discussion 1 Post

Question:

Do you currently use, or have you used in the past, any computer security measures? If so, what do you use? If not, what measures would you consider using? What attacks are you trying to protect against?

Response:

I have been in the technology field for 25 years, a large portion of that time as a storage architect and software developer. I use technologies like firewalls (Cisco ASA, Palo Alto Networks, and tons of open source solutions like iptables, pfSense, m0n0wall, Smoothwall, OPNsense, etc.) to secure external services: locking services down to allowed IP ranges and in some cases specific origin IP addresses, opening specific protocols and ports, NATing, proxying, and reverse-proxying traffic (NGINX), all in an effort to obfuscate and reduce the attack surface. I use protocols like ssh and sftp to encrypt communication between clients and servers. I use MD5 hashes to quickly validate binaries (and other files) to ensure that the files have not been tampered with. I use more obscure techniques like port knocking to programmatically secure ports when they need to be exposed but establishing a VPN connection is overly cumbersome. I use NIDS (network intrusion detection systems) like Snort in combination with the ELK stack to gather data and perform analytics to identify threats. I use RSA keys and multi-factor authentication (MFA) every day, for everything from ssh access using key pairs instead of password authentication to IPSec and OpenVPN connections which require multi-factor authentication via RSA tokens, Google Authenticator, Duo, etc. I also use AES-256 data-at-rest encryption (D@RE) technologies, RAID, and erasure coding, which all protect data at rest.
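As an aside, port knocking from the client side can be as simple as touching a secret sequence of closed ports before connecting; a hypothetical knock in PowerShell (host and ports are placeholders):

PS> 7000, 8000, 9000 | ForEach-Object { Test-NetConnection -ComputerName example.com -Port $_ -WarningAction SilentlyContinue | Out-Null }
PS> ssh user@example.com

A knock daemon on the server watches for the sequence and briefly opens the protected port only for the source IP that produced it.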

I also use many of the tools found at http://sectools.org/ on a routine basis. I use Nmap and Wireshark almost daily for network and protocol analysis. I also routinely run scheduled vulnerability tests using a subscription service from Beyond Security to identify and alert on vulnerabilities on public-facing web servers.  All of my Linux servers run Lynis daily to evaluate their security posture and publish reports, which are sent to a system that parses them and produces an exception report outlining any required remediation. I am also an aspiring ethical hacker who frequently uses Kali Linux and Pentoo Linux depending on what I am trying to do; Kali Linux is my go-to, but Pentoo is nice for RF hacking. I am an avid watcher of Hak5 and a reader and listener of 2600, and have been for many, many years.  I am the proud owner of a WiFi Pineapple, many homemade antennas, the USB Rubber Ducky, and the HackRF One. 🙂

I am just scratching the surface here; it seems like I could go on forever, but hopefully this provides a reasonable level of detail and insight. Oh yeah, I use Anti-Virus (AV) software for that false sense of security, but mostly just to slow my Windows desktop down. 🙂

Here are some of my objectives:

  1. Identify vulnerabilities and remediate before the bad guys do.
  2. Reduce my attack surface as much as possible.  I don't want to be a honeypot on the internet; there is no point in enticing a script kiddie to target me. For instance: don't allow your publicly accessible server to respond to ICMP, so that when your neighbor's inquisitive kid does a subnet scan he doesn't decide to target you; if you're gonna stick a server directly on the internet and allow ssh, don't use port 22; don't ever use WEP on your WiFi access point; etc.
  3. I love to learn and research things (hence the WiFi Pineapple, Rubber Ducky, HackRF One, etc.).  The more I know, the better I can protect my assets.

Since this is a security class, I’ll leave you with one of my favorite websites:  http://map.norsecorp.com/#/

References

Pfleeger, C. P., Pfleeger, S. L., & Margulies, J. (2015). Security in computing (5th ed.). Upper Saddle River: Prentice Hall.

FIT MGT5014 – Wk8 Discussion Post

Why Do Systems Fail?

It has been said that most systems fail because system builders ignore organizational behavior problems. Discuss the implications of this statement.


Organizational behavior and culture can determine the success or failure of just about anything, and information systems are not immune from this key risk variable.  The reason books like "How to Win Friends and Influence People" by Dale Carnegie, "To Sell Is Human: The Surprising Truth About Moving Others" by Daniel H. Pink, "The Paradox of Choice" by Barry Schwartz, "Enchantment: The Art of Changing Hearts, Minds, and Actions" by Guy Kawasaki, and many, many others exist is that as human beings we know success or failure is greatly influenced by our ability to influence others, to change behavior and culture.


Information systems are often built and designed by technologists who for years ignored the end-user, crafting systems they felt would optimize the business from their perspective, never considering that these systems were complex and, while logical to them, were illogical to the end-user. Over the past twenty or so years, we've witnessed the emergence of B2C (business-to-consumer) organizations which have eclipsed B2B (business-to-business) organizations in many respects.  These B2C organizations, like Apple, Facebook, Google, and Snapchat to name a few, focus on the end-user; they use agile development paradigms vs. rigid waterfall paradigms to rapidly pivot to meet the demands of a fickle consumer base.  The difference today is there is no concept of shelfware, the idea that Oracle or SAP sells you an application which you may or may not implement; adoption is paramount in the B2C world, customers have far more choice, they test-drive and pilot applications, and initial commitment is far lower.  The most successful information system initiatives today have bottom-up support vs. top-down mandates.  I think about the shift to cloud computing from traditional on-premise infrastructure, a movement driven by developers looking to simplify the process and become more agile by removing the painful processes built by the IT guy.  Five years ago IT organizations called this Shadow IT and resisted, but these grassroots information systems (IaaS, PaaS, SaaS, FaaS, etc.) have been some of the most transformative in the last thirty years.  IT organizations are having to learn how to apply governance to information systems which are widely deployed; the realization here is that the end-user wants to drive the experience, not have the experience dictated to them.  The power of developers and end-users (The New Kingmakers by Stephen O'Grady) has fostered a positive culture shift inside many IT organizations, which sadly had been so predictable for so many years that SNL parodied them in the Nick Burns sketches (https://www.nbc.com/saturday-night-live/cast/jimmy-fallon-14931/character/nick-burns-17301).

References

Jimmy Fallon. (n.d.). Retrieved March 05, 2017, from https://www.nbc.com/saturday-night-live/cast/jimmy-fallon-14931/character/nick-burns-17301

Laudon, K. C., & Laudon, J. P. (2016). Management information systems: managing the digital firm. Boston: Pearson.

Tang, E. (2011, January 22). Why Do Information Systems Fail? And how can managers/IT managers reduce the likelihood of such failures? Retrieved March 05, 2017, from https://erictang711.wordpress.com/2011/01/23/why-do-information-systems-fail-and-how-can-managers-it-managers-reduce-the-likelihood-of-such-failures/

Identify Solutions

Identify solutions that allowed Canada Life Insurance Corporation to correct the main gaps in the CIM system and the errors caused by the excessive decentralization of IT development services.


  • Canada Life Insurance over-rotated on decentralization and recognized that not all steering activities could be decentralized, so some activities were centralized and made the responsibility of the Department of Actuarial Services for branches.
  • All change management was centralized under Ghislaine Boulliance, with the exception of code tables, which would be controlled by the users.
  • A process was developed for tracking change requests as well as following up on completed ones.  This process ensures that change requests in the pipeline are appropriately prioritized and that once a change request is marked completed there is a connection back to the end-user to confirm the change is as expected, take feedback, and iterate if required.


Because Canada Life Insurance decided to outsource development and deployment of CIM to ITConsult, they should have developed governance around exit management outlining how ITConsult would transition out post-development.  It's implied that ITConsult's departure left both a skills gap and a culture gap that could have been avoided.

References

Laudon, K. C., & Laudon, J. P. (2016). Management information systems: managing the digital firm. Boston: Pearson.

Roy, V., & Aubert, B. (2006). The CIM Project. HEC Montreal Centre for Case Studies, 4(1), 14 pages. Retrieved March 5, 2017.

CIM Project Opinion

In your opinion, do you think the project was a success or a failure? Give your reason(s).


IMO the project was not a success.  Canada Life Insurance attempted to do too much with this project.  They were taking on the development and deployment of a transformative information system while at the same time attempting to shift their management approach for technology projects.  This new management approach for IT projects seemed to be aimed at decentralizing decision making, increasing end-user involvement in how technology solutions were architected and deployed, and moving from a traditional waterfall-based project methodology to an agile or hybrid methodology.  Canada Life Insurance was simply trying to do too much, and they further compounded the issues by engaging ITConsult (an outsourcer) for the development and deployment of CIM.  ITConsult ended up controlling the direction of the CIM project, which negated most of what Canada Life Insurance was trying to accomplish and also introduced new issues around organizational behavior and knowledge management.


It seems that the application was prototyped but never tested for scale, a common issue with rapid prototyping (Laudon & Laudon, 2016, p. 523).  Once the CIM system went into production, they experienced massive scale issues and over fifty change requests.


Canada Life Insurance worked to rectify the issues post-production deployment, but at production roll-out the project was a failure.  CanLife should have taken a more phased approach to development and deployment, addressing application requirements and organizational behavior modifications using an approach that provided a higher probability of success.

References

Laudon, K. C., & Laudon, J. P. (2016). Management information systems: managing the digital firm. Boston: Pearson.

Roy, V., & Aubert, B. (2006). The CIM Project. HEC Montreal Centre for Case Studies, 4(1), 14 pages. Retrieved March 5, 2017.

FIT MGT5014 – Wk7 – e-Choupal Memo

e-Choupal Memo: https://docs.google.com/document/d/1VPOIsN688WaUhSo8beUsTU7C5HbzyynKohUIFFGAflk/preview