Richard J. Bocchinfuso

"Be yourself; everyone else is already taken." – Oscar Wilde

FIT MGT5114 – Wk3 Discussion 1 Post

Question:

The traditional protection levels used by operating systems to protect files are read, write, and execute. What are some other possible levels that a user may wish to apply to files, folders, code, etc.? Justify your answers with examples.

Response:

File and folder permissions are governed slightly differently depending on the operating system, but the constructs are similar. Unix and other POSIX-compliant systems (Linux, Android, macOS, etc.) manage file and folder permissions using a user, group, others (or world) model.

For example:
foo.bar: type | owner | group | world
foo.bar: - | rwx | r-x | r-x (shown by ls -l as -rwxr-xr-x)

Files and folders can have permissions quickly set for Owner, Group and World by using the numeric value for the permission mask.
r (read) = 4
w (write) = 2
x (execute) = 1

To assign the file “foo.bar” the permission mask of:
owner = rwx
group = r-x
others = r-x
The command would be “chmod 755 foo.bar”
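The mapping from the symbolic triplets to the numeric mask can be sketched in a few lines of Python (an illustration of the arithmetic only, not how chmod is implemented internally):

```python
def octal_mask(owner: str, group: str, others: str) -> str:
    """Convert three 'rwx'-style triplets to an octal mode string."""
    def digit(triplet: str) -> int:
        value = 0
        if "r" in triplet:
            value += 4  # read
        if "w" in triplet:
            value += 2  # write
        if "x" in triplet:
            value += 1  # execute
        return value
    return "".join(str(digit(t)) for t in (owner, group, others))

print(octal_mask("rwx", "r-x", "r-x"))  # → 755
```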

Unix-based systems leverage three additional permission bits: setuid, setgid, and the sticky bit.
When the setuid permission is set, a user executing the file assumes the permissions of the file owner.
When the setgid permission is set, a user executing the file is granted permissions based on the group associated with the file.
When the sticky bit is set on a directory, the files within it can only be deleted by the file owner, the directory owner, or root.

These special permissions are set in the following fashion:
sticky bit = 1000
setgid = 2000
setuid = 4000

Setting the special bits works the same way as setting file permissions. To set the sticky bit on foo.bar with full permissions, the command would be “chmod 1777 foo.bar”. To set both setgid and setuid with rwx permissions for the owner and read-only permissions for the group and others, the command would be “chmod 6744 foo.bar”.
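Python's standard stat module exposes these same special bits as constants, which makes the octal arithmetic easy to sanity-check (a sketch; os.chmod would apply the resulting mode to a real file):

```python
import stat

# stat's constants: S_ISUID = 0o4000, S_ISGID = 0o2000, S_ISVTX (sticky) = 0o1000.
mode_1777 = stat.S_ISVTX | 0o777                  # sticky bit + rwxrwxrwx
mode_6744 = stat.S_ISUID | stat.S_ISGID | 0o744   # setuid + setgid + rwxr--r--

print(oct(mode_1777))  # 0o1777
print(oct(mode_6744))  # 0o6744
# os.chmod("foo.bar", mode_6744) would apply the mode to a real file
```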

Windows-based systems follow a similar file and folder permissions construct, at least on systems using the NTFS file system (most modern Windows OSes), which supports fine-grained access control lists. Older Microsoft operating systems like MS-DOS (FAT16 file system) and Windows 95 (FAT32 file system) use simple file attributes (e.g., Read-Only) rather than a full permission system.

Permission inheritance is an important concept; setgid and setuid are used to facilitate inheritance. The application is slightly different on Windows operating systems, but the premise is the same.

Source code can be protected in various ways beyond file permissions. One option is to compile the code, making it executable but not readable. Compiled languages like C++ compile into machine code, and these compiled binaries are not easily decompiled. Another option is a bytecode compiler, often used with interpreted languages like Python, Perl, and Ruby. Machine code must be compiled for specific architectures; for example, x86, x64, and ARM would require three separate binaries, while bytecode-compiled binaries work across architectures. The downside of bytecode-compiled binaries is that most of the source structure is retained in the compiled output, making it far easier to decompile.
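As a small illustration of the bytecode route, Python's standard py_compile module compiles a source file to a .pyc; the file and function names below are made up for the example:

```python
import pathlib
import py_compile
import tempfile

# Write a throwaway source file, then byte-compile it. The resulting .pyc
# is not human-readable, but (as noted above) it retains enough structure
# to be decompiled far more easily than native machine code.
src = pathlib.Path(tempfile.mkdtemp()) / "secret.py"
src.write_text("def answer():\n    return 42\n")

pyc = py_compile.compile(str(src))  # writes __pycache__/secret.cpython-*.pyc
print(pyc)
```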

Daemons like auditd provide the ability to maintain detailed audit trails of file access. Systems like Varonis provide the ability to audit and verify permissions to ensure that the proper permissions are assigned to files and folders.

Outside of file and folder permissions, there are application-level permissions, such as RDBMS permissions, which determine how a user can interact with the RDBMS and the data it houses. Object store permissions like those in AWS S3 offer an authorization model similar to filesystem permissions; these permissions are typically managed via API using standard authentication methods like OAuth2 and SAML token-based authentication. NAC, or Network Access Control, is a system which controls network access and manages security posture. Revision control systems like Git use access controls to protect source code; in the case of Git, these ACLs are very similar to Unix-based ACLs. Many systems today which leverage REST and SOAP APIs to access data use tokens and keys to authenticate users and grant rights. I just finished working on some code today (https://gist.github.com/rbocchinfuso/36f8c58eb93c4932ec4d31b6818b82e8) for a project which uses the Smartsheet API and token-based authentication so that cells can be updated using a command from Slack. This code authenticates using a token contained in an unpublished config.inc.php file and allows fields in a Smartsheet to be toggled using a command similar to “ssUpdate rowID,columnID,state”. Token-based authentication, in this case, can provide VIEWER, EDITOR, EDITOR_SHARE, ADMIN, and OWNER privileges (https://smartsheet-platform.github.io/api-docs/#authentication) while being stateless and without requiring username and password authentication.
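A minimal sketch of the token-based API pattern described above, using only Python's standard library; the token value is a placeholder (in my project it lives in an unpublished config file), and no request is actually sent here:

```python
import urllib.request

# The token would normally be loaded from an unpublished config file,
# never hard-coded or committed to version control.
API_TOKEN = "REPLACE_WITH_REAL_TOKEN"  # placeholder, not a real credential

# Token-based APIs typically pass the token in an Authorization header
# rather than a username/password pair, keeping the exchange stateless.
req = urllib.request.Request(
    "https://api.smartsheet.com/2.0/sheets",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
)

print(req.get_header("Authorization"))  # Bearer REPLACE_WITH_REAL_TOKEN
```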

References

Pfleeger, C. P., Pfleeger, S. L., & Margulies, J. (2015). Security in computing (5th ed.). Upper Saddle River: Prentice Hall.

FIT MGT5114 – Wk2 Discussion 1 Peer Response

Response 1:

Good points.  To play devil’s advocate here do you think that the scenario you put forward regarding OS or firmware upgrades with older or unsupported devices is likely to increase the probability of introducing unintentional vulnerabilities?

Response 2:

I agree that is a “yes and no sort of question”.  I like your example about clicking on the “You will never believe what So and So Famous Person is doing now” because it highlights the idea that the user is experiencing an unexpected behavior and thus the probability of malicious activity is likely greater.  IMO the complexity here lies in determining if the unexpected behavior indicates a vulnerability or threat.

FIT MGT5114 – Wk2 Discussion 1 Post

Question:

Is unexpected behavior in a computer program necessarily a vulnerability? Why or why not?

Response:

According to Pfleeger, Pfleeger & Margulies (2015), programming flaws can cause integrity problems which lead to harmful output or action, and these programming flaws offer an opportunity for exploitation by a malicious actor (p. 162). Agreed, but I believe the question is: does this imply that unexpected behavior is always a function of a programming flaw, and if there is a programming flaw, has it created a vulnerability which can be exploited? I think this is a hard question to answer without a deeper, more refined definition of “unexpected behavior”. I am sure many remember the first BASIC program they ever wrote, something like:

10 print “Name”
20 goto 10
run

The addition of a trailing semicolon and spaces between Name and the trailing quote on line ten (10 print “Name     “;) will alter the output; ten trailing spaces produce output that is different from twenty trailing spaces, and while the behavior may be unexpected, it does not indicate a vulnerability.

Most modern programming languages have constructs to trap exceptions. Constructs like try/catch/finally attempt to trap exceptions and, hopefully, exit the condition gracefully while logging the error.

try {
  execute code
}
catch (error) {
  log error if try throws an exception
}
finally {
  cleanup
}
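A runnable Python equivalent of the construct above, using the standard logging module (the division example is just illustrative):

```python
import logging

logging.basicConfig(level=logging.ERROR)

def safe_divide(a, b):
    try:
        return a / b                                   # execute code
    except ZeroDivisionError as exc:
        logging.error("division failed: %s", exc)      # log the exception
        return None
    finally:
        pass  # cleanup would go here (close files, release locks, etc.)

print(safe_divide(10, 2))  # 5.0
print(safe_divide(10, 0))  # logs the error and returns None
```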

Many modern applications leverage these constructs, but it's certainly possible to deliver working code which contains no exception handling at all. There is an abundance of code in the wild that is highly vulnerable for a myriad of reasons, ranging from bad programming to situations that were never considered and thus never addressed. Legacy systems like programmable logic controllers (PLCs), running code from a time when the world was not connected and security was not a concern, contain some serious vulnerabilities.

Agile and DevOps movements have dramatically accelerated the frequency of software releases. It's common practice to release software containing known and/or documented defects which are identified during testing cycles but not flagged as show-stoppers, meaning the release cycle continues. These defects are not vulnerabilities but rather known bugs, typically with documented workarounds: essentially undesirable expected behavior rather than unexpected behavior. Shorter release cycles are accompanied by an increase in unexpected behavior, offset by rigorous version control, A/B testing, and automation which automates the rollback to a known good system. Systems fail faster today, and rollbacks happen even more quickly. There is irony here: systems which have life-or-death implications have slow (very slow) release cycles (e.g., it's hard to do frequent software releases and tolerate known defects when talking about a heart-lung machine). These systems tend to be arcane and often vulnerable because they were never architected to live in a connected world; they value predictability and stability over functionality.

Exception handling, along with verbose logging and the creation of audit trails, has become standard practice. In the days of top-down systems it was easy for the developer to own the user experience, but the dawn of event-driven systems made this much harder, and logging is now a critical aspect of every system. The focus of many security firms is no longer to keep those exploiting vulnerabilities out but rather to keep them in, find them, and determine what they are trying to do (http://www.mercurynews.com/2015/02/10/keeping-hackers-out-no-longer-the-best-security-strategy-fireeye-says/).

References

A New Avenue of Attack: Event-driven system vulnerabilities. (n.d.). Retrieved March 15, 2017, from http://www.isg.rhul.ac.uk/~simos/event_demo/

Error Handling. (n.d.). Retrieved March 15, 2017, from https://www.owasp.org/index.php/Error_Handling

Manifesto for Agile Software Development. (n.d.). Retrieved March 15, 2017, from http://agilemanifesto.org/

Pfleeger, C. P., Pfleeger, S. L., & Margulies, J. (2015). Security in computing (5th ed.). Upper Saddle River: Prentice Hall.

Press, A. (2016, August 12). Keeping hackers out no longer the best security strategy, FireEye says. Retrieved March 15, 2017, from http://www.mercurynews.com/2015/02/10/keeping-hackers-out-no-longer-the-best-security-strategy-fireeye-says/

FIT MGT5114 – Wk1 Discussion 2 Peer Response

I enjoyed reading your post. Many hobbyist black hat hackers and script kiddies are pretty lazy. I understand the concept of never connecting to a Starbucks WiFi to avoid a man-in-the-middle (MitM) attack, but it is probably a bit unreasonable. Security is often like locking the front door of our home or our car door when we leave it unattended; these are merely deterrents, but because most thieves are pretty lazy and will walk around the mall pulling door handles until one opens, the deterrent is quite effective. The same idea often applies to network security: if we use something like a VPN to encrypt communication, while far from unhackable, it's likely enough to have the individual perpetrating the man-in-the-middle attack pass over us and look for easier prey. In a world where our lives are conducted online, my personal philosophy is to lock the door of the house and the car when I leave them unattended, but never leaving the house or the car unattended is probably unreasonable.

If you’ve never seen how easy it is to conduct a MitM attack, here is a good instructional video (https://www.youtube.com/watch?v=IdhuX4BEK6s) that shows how to use the WiFi Pineapple to carry out a MitM attack. Scary simple. 🙂 It’s much harder to crack encrypted captured data, so if someone is sitting at a Starbucks with a WiFi Pineapple conducting a MitM attack, with either a rogue access point or by spoofing an access point (evil twin AP), the probability of the perpetrator spending the time to decrypt the data is low. A MitM attacker will likely move on to someone else who is passing unencrypted traffic.

References

Pfleeger, C. P., Pfleeger, S. L., & Margulies, J. (2015). Security in computing (5th ed.). Upper Saddle River: Prentice Hall.

FIT MGT5114 – Wk1 Discussion 1 Peer Response

I enjoyed reading your post. Long, complex passwords have become an essential security measure. I am an aspiring ethical hacker, and one of my hobbies is cracking hashed passwords. Ten years ago, cracking a nine-character upper- and lowercase alphanumeric password would have been highly improbable. Today you can grab an AWS p2.16xlarge instance for about fourteen dollars an hour on demand, and if you're frugal and looking to crack passwords at scale, you could use spot instances and lower the cost for a p2.16xlarge to less than seven dollars an hour. The use of GPUs has lowered the time to crack passwords from years to days and from days to minutes and seconds. Most people know that using a long alphanumeric password which contains upper- and lowercase letters, numbers, and special characters is a good idea. It’s also a good idea to avoid simple leet passwords like “H0use” because these sorts of passwords provide little in the way of extra security. A little-known fact is that the use of a “:” in your password makes it significantly harder to crack; the reason is that password-cracking tools like hashcat use the colon as a delimiter (linked to the Unix /etc/passwd file's use of the colon to delimit fields) for the split function, so a colon confuses the password cracker. Unfortunately, the colon is a common delimiter, and not all systems will allow its use.
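The keyspace arithmetic behind these claims is easy to sketch; the hash rate below is an assumed round number for illustration, not a benchmark of any particular GPU instance:

```python
# Back-of-the-envelope brute-force estimate for a 9-character
# upper/lower/digit password.
charset = 26 + 26 + 10             # uppercase + lowercase + digits
length = 9
keyspace = charset ** length       # total candidate passwords

hashes_per_second = 10**11         # assumed aggregate GPU rig rate

seconds = keyspace / hashes_per_second
print(f"{keyspace:,} candidates, ~{seconds / 86400:.1f} days worst case")
```

At the assumed rate, the full 62^9 keyspace falls in a matter of days, which is why adding length and special characters matters so much.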

References

Amazon EC2 – P2 Instances. (n.d.). Retrieved March 12, 2017, from https://aws.amazon.com/ec2/instance-types/p2/

Goodin, D. (2013, May 28). Anatomy of a hack: How crackers ransack passwords like “qeadzcwrsfxv1331”. Retrieved March 12, 2017, from https://arstechnica.com/security/2013/05/how-crackers-make-minced-meat-out-of-your-passwords/2/

Gite, V. (2015, August 03). Understanding /etc/passwd File Format. Retrieved March 12, 2017, from https://www.cyberciti.biz/faq/understanding-etcpasswd-file-format/

GPU Password Cracking – Bruteforceing a Windows Password Using a Graphic Card. (2011, July 12). Retrieved March 12, 2017, from https://mytechencounters.wordpress.com/2011/04/03/gpu-password-cracking-crack-a-windows-password-using-a-graphic-card/

Hashcat advanced password recovery. (n.d.). Retrieved March 12, 2017, from https://hashcat.net/hashcat/

Mathiopoulos, I. (2016, October 05). Running hashcat in Amazon’s AWS new 16 GPU p2.16xlarge instance. Retrieved March 12, 2017, from https://medium.com/@iraklis/running-hashcat-in-amazons-aws-new-16-gpu-p2-16xlarge-instance-9963f607164c#.kcszxs1s5

Pfleeger, C. P., Pfleeger, S. L., & Margulies, J. (2015). Security in computing (5th ed.). Upper Saddle River: Prentice Hall.

Project 12: Cracking Linux Password Hashes with Hashcat (15 pts.). (n.d.). Retrieved March 12, 2017, from https://samsclass.info/123/proj10/p12-hashcat.htm

Spot Bid Advisor. (n.d.). Retrieved March 12, 2017, from https://aws.amazon.com/ec2/spot/bid-advisor/

FIT MGT5114 – Wk1 Discussion 2 Post

Question:

Why should you periodically change the key used to encrypt messages? What attack is more likely to succeed if a key has been used frequently? How frequently should the key be changed?

Response:

While breaking modern-day encryption keys (e.g., AES-256, RSA-1024, RSA-2048, RSA-4096) is improbable, it is not impossible.  Many enterprise-class encryption systems leverage key management systems so that encryption key rotation can be accomplished without the massive burden of having to maintain and track the key pairs manually.  One such solution is keyAuthority from Thales.  Key management systems are often used for encrypting data-at-rest on disk and tape.  As we learned in chapter two of the text, the initial exchange of keys is subject to a man-in-the-middle attack, but more importantly, if a single key pair is used, the lack of a key rotation policy could create (and has created) serious exposure.  In a world where developers are moving at an unprecedented pace and cloud computing is providing easy access to infrastructure, we are seeing all sorts of human error which is creating severe pain for many organizations.  Most notably, developers are publishing keys to GitHub, and hackers are now crawling GitHub looking for AWS keys (the code to perform the crawling has even been published to GitHub).  AWS is a giant honeypot sitting on the internet, and human error like publishing AWS keys to GitHub is a huge risk; a key management strategy is really important to ensuring that if a key gets into the wild you can minimize the potential impact.
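A simplified sketch of an age-based rotation check, using only the standard library; the 90-day policy is an assumption for illustration, and real key management systems like the one mentioned above do far more than track key age:

```python
import secrets
from datetime import datetime, timedelta, timezone

MAX_KEY_AGE = timedelta(days=90)  # assumed policy, not a regulatory mandate

class ManagedKey:
    """Toy key record: random 256-bit material plus a creation timestamp."""
    def __init__(self):
        self.material = secrets.token_bytes(32)       # 256 bits of key material
        self.created = datetime.now(timezone.utc)

    def needs_rotation(self) -> bool:
        # A real KMS would also consider usage counts, compromise events,
        # and compliance requirements, not just age.
        return datetime.now(timezone.utc) - self.created > MAX_KEY_AGE

key = ManagedKey()
print(key.needs_rotation())  # False — freshly generated
```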

The governance of how often an encryption key should be changed really depends on what the encryption key is used for.  IMO the complexity of key management and the value of the assets being protected need to be taken into consideration before deciding on a key management strategy.  Additionally, compliance with regulatory agencies needs to be considered when developing a key management strategy; compliance with regulations like SEC 17a-4 and HIPAA is likely to seriously influence key management policies.

References

Pauli, D. (2015, January 6). Dev put AWS keys on Github. Then BAD THINGS happened. Retrieved March 08, 2017, from https://www.theregister.co.uk/2015/01/06/dev_blunder_shows_github_crawling_with_keyslurping_bots/

Burton, H. (2017, January 10). TruffleHog: Hacker publishes secret key spotter to Github. Retrieved March 8, 2017, from http://www.theinquirer.net/inquirer/news/3002198/trufflehog-hacker-publishes-secret-key-spotted-to-github

Mimoso, M. (2014, June 19). Hacker Puts Hosting Service Code Spaces Out of Business. Retrieved March 08, 2017, from https://threatpost.com/hacker-puts-hosting-service-code-spaces-out-of-business/106761/

Pal, K. (2015, July 15). 10 Best Practices for Encryption Key Management and Data Security. Retrieved March 08, 2017, from https://www.techopedia.com/2/30767/security/10-best-practices-for-encryption-key-management-and-data-security

Pfleeger, C. P., Pfleeger, S. L., & Margulies, J. (2015). Security in computing (5th ed.). Upper Saddle River: Prentice Hall.

Townsend, P. (n.d.). HIPAA/HITECH Act – Encryption and Key Management Requirements. Retrieved March 08, 2017, from https://info.townsendsecurity.com/bid/38671/HIPAA-HITECH-Act-Encryption-and-Key-Management-Requirements

FIT MGT5114 – Wk1 Discussion 1 Post

Question:

Do you currently use, or have you used in the past, any computer security measures? If so, what do you use? If not, what measures would you consider using? What attacks are you trying to protect against?

Response:

I have been in the technology field for 25 years, a large portion of that time as a storage architect and software developer. I use technologies like firewalls (Cisco ASA, Palo Alto Networks, and tons of open source solutions like iptables, pfSense, m0n0wall, Smoothwall, OPNsense, etc.) to secure external services, locking services down to allowed IP ranges and in some cases specific origin IP addresses, opening specific protocols and ports, and NATing, proxying, and reverse-proxying traffic (NGINX), all in an effort to obfuscate and reduce the attack surface. I use protocols like ssh and sftp to encrypt communication between clients and servers. I use MD5 hashes to quickly validate binaries (and other files) to ensure that the files have not been tampered with. I use more obscure techniques like port knocking to programmatically secure ports when they need to be exposed but establishing a VPN connection is overly cumbersome. I use NIDS (network intrusion detection systems) like Snort in combination with the ELK stack to gather data and perform analytics to identify threats. I use RSA keys and multi-factor authentication (MFA) every day, for everything from ssh access using key pairs instead of password authentication to IPsec and OpenVPN connections which require multi-factor authentication via RSA tokens, Google Authenticator, Duo, etc. I also use AES-256 data-at-rest encryption (D@RE) technologies, RAID, and erasure coding, which all protect data at rest.
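The hash-validation step mentioned above can be sketched with Python's hashlib; the file path and published checksum are illustrative, and MD5 is fine for a quick integrity spot-check even though SHA-256 is preferred where collision resistance matters:

```python
import hashlib

def file_md5(path: str) -> str:
    """Compute the MD5 hex digest of a file, streaming in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):  # handles large files
            h.update(chunk)
    return h.hexdigest()

# Typical use: compare against the checksum published alongside a download.
# assert file_md5("download.iso") == published_md5
```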

I also use many of the tools found on this site (http://sectools.org/) on a routine basis. I use Nmap and Wireshark almost daily for network and protocol analysis. I also routinely run scheduled vulnerability tests using a subscription service from Beyond Security to identify and alert on vulnerabilities on public-facing web servers.  All of my Linux servers run Lynis daily to evaluate their security posture and publish reports, which are sent to a system that parses them and produces an exception report outlining any required remediation. I am also an aspiring ethical hacker who frequently uses Kali Linux and Pentoo Linux depending on what I am trying to do; Kali Linux is my go-to, but Pentoo is nice for RF hacking. I am an avid watcher of Hak5 and a reader and listener of 2600, and have been for many, many years.  I am the proud owner of a WiFi Pineapple, many homemade antennas, the USB Rubber Ducky, and the HackRF One. 🙂

I am just scratching the surface here; it seems like I could go on forever, but hopefully this provides a reasonable level of detail and insight. Oh yeah, I use Anti-Virus (AV) software for that false sense of security, but mostly just to slow my Windows desktop down. 🙂

Here are some of my objectives:

  1. Identify vulnerabilities and remediate before the bad guys do.
  2. Reduce my attack surface as much as possible.  I don’t want to be a honeypot on the internet; there is no point in enticing a script kiddie to target me. For instance, don’t allow your publicly accessible server to respond to ICMP, so that when your neighbor’s inquisitive kid does a subnet scan he doesn’t decide to target you; if you’re going to stick a server directly on the internet and allow ssh, don’t use port 22; don’t ever use WEP on your WiFi access point; etc., etc.
  3. I love to learn and research things (hence why I own a WiFi Pineapple, Rubber Ducky, HackRF One, etc…).  The more I know the better I can protect my assets.

Since this is a security class, I’ll leave you with one of my favorite websites:  http://map.norsecorp.com/#/

References

Pfleeger, C. P., Pfleeger, S. L., & Margulies, J. (2015). Security in computing (5th ed.). Upper Saddle River: Prentice Hall.

FIT MGT5014 – Wk8 Discussion Post

Why Systems Fail?

It has been said that most systems fail because system builders ignore organizational behavior problems. Discuss the implications of this statement.

 

Organizational behavior and culture can determine the success or failure of just about anything; information systems are not immune from this key risk variable.  The reason books like “How to Win Friends and Influence People” by Dale Carnegie, “To Sell Is Human: The Surprising Truth About Moving Others” by Daniel H. Pink, “The Paradox of Choice” by Barry Schwartz, “Enchantment: The Art of Changing Hearts, Minds, and Actions” by Guy Kawasaki, and many, many others exist is that as human beings we know success or failure is greatly influenced by our ability to influence others, to change behavior and culture.

 

Information systems are often built and designed by technologists who for years ignored the end-user, crafting systems they felt would optimize the business from their perspective but never considering that these systems were complex and, while logical to them, were illogical to the end-user. Over the past twenty or so years, we’ve witnessed the emergence of B2C (business-to-consumer) organizations which have eclipsed B2B (business-to-business) organizations in many aspects.  These B2C organizations, like Apple, Facebook, Google, and Snapchat to name a few, focus on the end-user; they use agile development paradigms vs. rigid waterfall paradigms to rapidly pivot to meet the demands of a fickle consumer base.  The difference today is there is no concept of shelfware, the idea that Oracle or SAP sells you an application which you may or may not implement; adoption is paramount in the B2C world, customers have far more choice, they test drive and pilot applications, and initial commitment is far lower.  The most successful information system initiatives today have bottom-up support vs. top-down mandates.  I think about the shift to cloud computing from traditional on-premise infrastructure; this movement was driven by developers looking to simplify the process and become more agile by removing the painful processes built by the IT guy.  Five years ago IT organizations called this Shadow IT and resisted, but these grassroots information systems (IaaS, PaaS, SaaS, FaaS, etc.) have been some of the most transformative in the last thirty years.  IT organizations are having to learn how to apply governance to information systems which are widely deployed; the realization here is that the end-user wants to drive the experience, they don’t want the experience dictated to them.
The power of developers and end-users (The New Kingmakers by Stephen O’Grady) has fostered a positive culture shift inside many IT organizations, who sadly have been so predictable for so many years that SNL parodied them in the Nick Burns sketches (https://www.nbc.com/saturday-night-live/cast/jimmy-fallon-14931/character/nick-burns-17301).

 

References

 

Jimmy Fallon. (n.d.). Retrieved March 05, 2017, from https://www.nbc.com/saturday-night-live/cast/jimmy-fallon-14931/character/nick-burns-17301

 

Laudon, K. C., & Laudon, J. P. (2016). Management information systems: managing the digital firm. Boston: Pearson.

 

Tang, E. (2011, January 22). Why Do Information Systems Fail? And how can managers/ IT managers reduce the likelihood of such failures? Retrieved March 05, 2017, from https://erictang711.wordpress.com/2011/01/23/why-do-information-systems-fail-and-how-can-managers-it-managers-reduce-the-likelihood-of-such-failures/

 

Identify Solutions

Identify solutions that allowed Canada Life Insurance Corporation to correct the main gaps in the CIM system and the errors caused by the excessive decentralization of IT development services.

 

  • Canada Life Insurance over-rotated on decentralization and recognized that not all steering activities could be decentralized, so some activities were centralized and made the responsibility of the Department of Actuarial Services for branches.
  • All change management was centralized under Ghislaine Boulliance, with the exception of code tables, which would be controlled by the users.
  • A process was developed for tracking change requests as well as following up on completed change requests.  This process ensures that change requests in the pipeline are appropriately prioritized and that once a change request is marked complete, there is a connection with the end-user to ensure the change is as expected, to take feedback, and to iterate if required.

 

Because Canada Life Insurance decided to outsource development and deployment of CIM to ITConsult, they should have developed governance around exit management which would have outlined how ITConsult would transition post-development.  It’s implied that ITConsult’s departure left both a skills and culture gap that could have been avoided.

 

References

 

Laudon, K. C., & Laudon, J. P. (2016). Management information systems: managing the digital firm. Boston: Pearson.

 

Roy, V., & Aubert, B. (2006). The CIM Project. HEC Montreal Centre for case studies 14 pages,4(1). Retrieved March 5, 2017.

 

CIM Project Opinion

In your opinion, do you think the project was a success or a failure? Give your reason(s).

 

IMO the project was not a success.  Canada Life Insurance attempted to do too much with this project.  They were taking on the development and deployment of a transformative information system while at the same time attempting to shift their management approach for technology projects.  This new management approach for IT projects seemed to be aimed at decentralizing decision making, increasing end-user involvement in how technology solutions were architected and deployed, and moving from a traditional waterfall-based project methodology to an agile or hybrid methodology.  Canada Life Insurance was simply trying to do too much, and they further compounded the issues by engaging ITConsult (an outsourcer) for the development and deployment of CIM.  ITConsult ended up controlling the direction of the CIM project, which negated most of what Canada Life Insurance was trying to accomplish and also introduced new issues around organizational behavior and knowledge management.

 

It seems that the application was prototyped but never tested for scale, a common issue with rapid prototyping (Laudon & Laudon, 2016, p. 523).  Once the CIM system went into production, they experienced massive scale issues and over fifty change requests.

 

Canada Life Insurance worked to rectify the issues post-production deployment, but the project at production roll-out was a failure.  CanLife should have taken a more phased approach to development and deployment, addressing application requirements and organizational behavior modifications using an approach that provided a higher probability of success.

 

References

 

Laudon, K. C., & Laudon, J. P. (2016). Management information systems: managing the digital firm. Boston: Pearson.

 

Roy, V., & Aubert, B. (2006). The CIM Project. HEC Montreal Centre for case studies 14 pages,4(1). Retrieved March 5, 2017.

 

FIT MGT5014 – Wk7 – e-Choupal Memo

[google-drive-embed url=”https://docs.google.com/document/d/1VPOIsN688WaUhSo8beUsTU7C5HbzyynKohUIFFGAflk/preview” title=”FIT – MGT5014 – Week 7 – e-Choupal Memo” icon=”https://drive-thirdparty.googleusercontent.com/16/type/application/vnd.google-apps.document” width=”100%” height=”920″ style=”embed”]

FIT MGT5014 – Wk7 Discussion Post

The Internet may not make corporations obsolete, but the corporations will have to change their business models. Do you agree? Why or why not?

 

The internet and internet business models have already made some pretty sizable corporations obsolete.  Remember Palm? How about Blockbuster Video, Kodak, Garmin, etc.?  General technology advances hurt these companies, and forward-thinking, internet-connected business models produced by companies like Apple, Amazon, and Netflix essentially made it impossible for these legacy businesses to compete.  The world is a bit different today than it was when these titans virtually evaporated, but internet goliaths like Amazon are still eating legacy businesses unable to adapt to new internet-led business models; Borders and Circuit City are just a few of the retailers who have bid farewell at the hands of Amazon.  Massive legacy retailers like Walmart are also feeling the pain and trying to make up ground by acquiring companies like Jet.com to help them morph their brick-and-mortar businesses into efficient e-tailers.  The internet has changed the game completely, with paths to revenue, marketing, customer service, customer acquisition, and scale strategies all very different than they were for brick-and-mortar businesses.  The biggest challenge corporations have to overcome is selling themselves too hard on their “unique” value proposition without adapting, convincing themselves they can carry on like they always have and survive.  In a wired world, people are shopping 24x7x365, and Black Friday has been replaced by Cyber Monday.  With Black Friday lagging behind Cyber Monday by only approximately one hundred million dollars in online sales, shoppers spent 12.1% more online in 2016 than they did in 2015, with retail store foot traffic dropping 10.4%.  “Corporations” will not be obsolete, but the companies we consider cultural icons, like Sears and General Motors, will continue to be challenged by new entrants like Amazon and Tesla.

 

I agree 110% that legacy businesses who have been slow to adapt to a connected world will be very challenged to sustain and grow their businesses.  Corporations who consider themselves data-driven organizations need to think about the data driving their companies.  Last quarter’s sales figures, while important, do not depict consumer sentiment; for instance, sentiment analysis using the #GrabYourWallet hashtag may provide some valuable indicators of how a retailer should adjust their inventory.  Many legacy corporations are mining historical structured data as an indicator of the future because they haven’t yet made the shift to what IDC calls the 3rd Platform (http://www.idc.com/prodserv/3rd-platform/); this lag is natural, but every organization needs a 3rd Platform strategy because it represents the future.


References

Chronicle, B. E. (n.d.). How ‘Amazon factor’ killed retailers like Borders, Circuit City. Retrieved February 22, 2017, from http://www.sfgate.com/business/article/How-Amazon-factor-killed-retailers-like-6378619.php

Laudon, K. C., & Laudon, J. P. (2016). Management information systems: managing the digital firm. Boston: Pearson.

Male, B. (2009, December 11). 21 Things That Became Obsolete This Decade. Retrieved February 22, 2017, from http://www.businessinsider.com/21-things-that-became-obsolete-this-decade-2009-12?op=1%2F#undaries-20

MSG Management Study Guide. (n.d.). Retrieved February 22, 2017, from http://www.managementstudyguide.com/impact-of-internet-revolution-in-business.htm

O’Neill, M. (2011, March 01). How Netflix Bankrupted And Destroyed Blockbuster [INFOGRAPHIC]. Retrieved February 22, 2017, from http://www.businessinsider.com/how-netflix-bankrupted-and-destroyed-blockbuster-infographic-2011-3

Stelter, B. (2013, November 06). Internet Kills the Video Store. Retrieved February 22, 2017, from http://www.nytimes.com/2013/11/07/business/media/internet-kills-the-video-store.html

Taylor, K. (2017, February 01). An anti-Trump movement is calling for the boycott of these 33 retailers. Retrieved February 22, 2017, from http://www.businessinsider.com/trump-boycott-retailers-sell-trump-products-2017-1

Walmart Agrees to Acquire Jet.com, One of the Fastest Growing e-Commerce Companies in the U.S. (n.d.). Retrieved February 22, 2017, from http://news.walmart.com/2016/08/08/walmart-agrees-to-acquire-jetcom-one-of-the-fastest-growing-e-commerce-companies-in-the-us

Should all companies use Facebook and Twitter for customer service and advertising? Why or why not? What kinds of companies are best suited to use these platforms?


This is an interesting and somewhat loaded question.  The key question every corporation needs to answer regarding a social media presence (Facebook, Twitter, Instagram, etc.) is: will that presence positively or negatively impact the business?  This is about more than worrying over a positive or negative reaction (tweet) from the social community.  Organizations need to consider whether they can accurately represent who they are in the social sphere: do they have the time and inclination to curate content representative of their brand?  Having no Twitter presence is probably better than a Twitter profile with one tweet from two years ago that says “Hello”.

From a customer service perspective, it is hard to imagine a company having a social presence without a clear understanding of how social media is impacting the organization; you can’t decouple specific aspects of social media.  Once you have a social media presence, you are, in my opinion, in the social media customer service business by default.  A company can attempt a censorship approach, but this usually doesn’t go well, and such an organization is probably better served steering clear of social media altogether.

Corporations that leverage social media today also need to leverage the vast amounts of data that can be mined from these platforms.  This data is not as straightforward as sales figures neatly organized into tables, columns, rows, and fields, but it can be incredibly powerful for predictive analytics.  Many organizations that do this successfully outsource the analytics and machine learning because, while social media is important to their business, they don’t possess the internal expertise to execute.  Other organizations that consider social media a path to competitive advantage may choose to mine this data directly because they perceive it as core to their business.  With 42% of complaining customers expecting an answer in sixty minutes or less, you had better have a (ro)bot leveraging machine learning algorithms responding to customers, because a human response is probably not realistic at Twitter’s scale.
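A minimal sketch of that kind of automation, assuming a simple keyword trigger rather than a real machine learning classifier: incoming mentions that look like complaints get an immediate templated acknowledgment (keeping first-response time well inside the 60-minute window) and are queued for a human agent.  All names and word lists here are hypothetical.

```python
# Hypothetical customer-service triage bot: auto-acknowledge complaints
# immediately, then queue them for human follow-up.  The keyword trigger
# stands in for what would be an ML classifier in a real deployment.
from dataclasses import dataclass, field
from typing import List

COMPLAINT_WORDS = {"broken", "refund", "terrible", "waiting", "worst"}

@dataclass
class Mention:
    user: str
    text: str

@dataclass
class TriageResult:
    auto_replies: List[str] = field(default_factory=list)
    human_queue: List[Mention] = field(default_factory=list)

def triage(mentions: List[Mention]) -> TriageResult:
    """Send an instant acknowledgment for complaints; queue them for a human."""
    result = TriageResult()
    for m in mentions:
        words = {w.strip(".,!?").lower() for w in m.text.split()}
        if words & COMPLAINT_WORDS:
            # The templated reply satisfies the fast-response expectation
            # while a human agent handles the substance later.
            result.auto_replies.append(
                f"@{m.user} Sorry to hear that, we're on it. A team member "
                "will follow up with you shortly."
            )
            result.human_queue.append(m)
    return result
```

The design point is the split: the bot owns response latency, while humans own resolution quality, which is how a brand can meet the 60-minute expectation at Twitter scale.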


I do believe that most companies need to explore social media for customer service and advertising.  Social media can be a great way to drive customer intimacy (Laudon & Laudon, 2016, p. 361), but it can also be an excellent way to alienate customers.  Social media is an incredible platform, but companies should clearly understand their audience and have a clear vision of how they want to present their brand.  By engaging in a social media initiative, organizations are making a commitment, because engagement followed by abandonment can be harmful.


References

Baer, J. (n.d.). 42 Percent of Consumers Complaining in Social Media Expect 60 Minute Response Time. Retrieved February 22, 2017, from http://www.convinceandconvert.com/social-media-research/42-percent-of-consumers-complaining-in-social-media-expect-60-minute-response-time/

Gunelius, S. (n.d.). 10 Laws of Social Media Marketing. Retrieved February 22, 2017, from https://www.entrepreneur.com/article/218160

Laudon, K. C., & Laudon, J. P. (2016). Management information systems: managing the digital firm. Boston: Pearson.

Newton, C. (2016, November 01). Twitter introduces customer service bots in direct messages. Retrieved February 22, 2017, from http://www.theverge.com/2016/11/1/13488472/twitter-dm-welcome-messages-quick-replies-customer-service

Pangan, A. (2014, October 23). Social Media Analytics: Is It Worth Outsourcing? Retrieved February 22, 2017, from http://www.infinitdatum.com/blog/social-media-analytics-is-it-worth-outsourcing/

Parrish, C. (2015, July 08). How To Use Social Media To Market Your Business. Retrieved February 22, 2017, from https://www.fastcompany.com/3047232/ask-the-experts/how-to-use-social-media-to-market-your-business

Using Social Media Analysis for a Competitive Advantage. (2015, January 06). Retrieved February 22, 2017, from http://www.fathomdelivers.com/blog/social-media/using-social-media-analysis-competitive-advantage/