Richard J. Bocchinfuso

"Be yourself; everyone else is already taken." – Oscar Wilde

FIT – MGT 5115 – Wk 6 Discussion Post

Define social media and explain why these technologies are different from earlier manifestations of the web. 

The definition of “social media” that I liked the most broke down the components of the term and defined them individually before deriving a combined meaning.

  • “Social” refers to the social interaction of people bidirectionally sharing information with others.
  • “Media” refers to the form of communication, in this case, the internet vs. traditional forms of media such as television, radio, and newspapers.

Given this contextual understanding of the words “social” and “media,” we can now define “social media” as the use of the internet and internet-based platforms that allow people to share and consume information.  I will add to this definition that this sharing and consumption happens in a near-synchronous fashion.

Traditional forms of media (e.g., TV, radio, and print) compile data and present the finished information to the consumer in what I will call a two-dimensional world. In the age of "social media," raw data may be shared by a user and combined with other users' data to create information that can be gleaned only through the aggregation of multiple data points volunteered by social media users. The creation of this information, knowledge, and alternate perspectives happens both knowingly and unknowingly to the users who volunteered the data and metadata.

One of my all-time favorite examples of this was a website called Please Rob Me (now defunct, but still a great example). It used social media data, in particular Twitter and Foursquare check-ins (the check-in craze seems to be over, but it was hot a few years ago; another dead unicorn), to let bad guys know when you wouldn't be home so they could rob you unencumbered. This is a perfect example of how social media platforms take tangential data and leverage it to create new information.

When we contrast social media with early manifestations of the web, meaning the World Wide Web (and Gopher, can't forget about Gopher), those were alternate digital publishing platforms where the creator published information to the web. Early internet protocols like IRC built social communities and subcultures, but the data was transient, unlike social media, which has turned transient 140-character snippets into information. Social media focuses on capturing data and metadata (e.g., geolocation data); the data provided by a single user is aggregated with other user data to determine things like sentiment, statistical inference, etc.

When we look at the difference between social media and traditional media (the early web was just a new delivery method for traditional media), with social media we opt in to a system where marketing is cheaper, has greater reach, and is targeted because of our endless desire to share so many things about ourselves. The quid pro quo is the ability to interact and influence in a way not possible before the dawn of social media. The benefit to the marketers is obvious; we provide a continuous stream of data which they convert to information and pivot on as required.

One of my favorite social media stories is the story of The Ritz-Carlton and Joshie the Giraffe (great read that highlights the power of social media).

I travel quite a bit, and I write code for a living and for fun; I'll spare you the details of an unpleasant recent travel situation, but I will share the social media story. Let's just say I had a situation in a Hilton hotel which required me to check out and move to another hotel. When I arrived home later that week, traumatized, I called Hilton and filed a report online. A week later there was no movement on the issue; it was like I was banging my head against the wall. At this point I decided to take to Twitter. My approach was two-fold: one, post a message hashtagged #Hilton, and two, write a Twitterbot that would look for tweets hashtagged #Hilton and send a reply with a note and a picture of my #Hilton experience. Less than 24 hours later Hilton made restitution for my experience. The power of social media, but it cuts both ways.
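For the curious, the bot's core logic is simple enough to sketch. What follows is a hypothetical reconstruction in Python, not the original bot: the Twitter API calls, credentials, and rate-limit handling are omitted, and the tweet dictionaries stand in for what a search endpoint would return.

```python
import re

HASHTAG = "#Hilton"  # the tag the bot watches for

def find_tagged_tweets(tweets, hashtag=HASHTAG):
    """Return the tweets whose text contains the hashtag (case-insensitive)."""
    pattern = re.compile(re.escape(hashtag), re.IGNORECASE)
    return [t for t in tweets if pattern.search(t["text"])]

def compose_reply(tweet, image_url):
    """Build a reply mentioning the tweet's author, with a note and a picture link."""
    return ("@{user} Here is what my {tag} experience looked like: {img}"
            .format(user=tweet["user"], tag=HASHTAG, img=image_url))

# Stand-in tweet stream (in the real bot these came from the Twitter search API)
stream = [
    {"user": "traveler1", "text": "Loving my stay! #hilton"},
    {"user": "devguy", "text": "Shipping code all night"},
]

for t in find_tagged_tweets(stream):
    print(compose_reply(t, "https://example.com/my-experience.jpg"))
```

The real bot would simply loop: poll the search endpoint for the hashtag, filter out tweets it already replied to, and post the composed reply.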

My dialog with Hilton on Twitter:

Early manifestations of the web were unidirectional and asynchronous; near-synchronous communication protocols like IRC never really made it to the masses, and IRC was built around closed communities. There is talk that a shift may once again be underway, with social media groups giving way to smaller, more targeted groups like Slack-style communities.

The internet and the web are ever-evolving, and the pace of innovation is increasing, as are unicorn mortality rates. Twitter was the darling of social media four years ago, and today it is seemingly embroiled in a sell-or-fizzle-out scenario. No one even knows who Foursquare is anymore. Dare I say Facebook is for the elder generation, blogging seems to be dead, and the world seems to be hooked on pictures and filters (aka Snapchat). I don't get the Snapchat revolution, but I am part of that elder generation still using Twitter, blogs, RSS, IRC, etc. I consider myself more a consumer of information than a sharer of information; I try not to share too much raw data and metadata, but we all do it. Social media is everywhere; it's not just the platforms we are all familiar with like Facebook, Twitter, Instagram, LinkedIn, Reddit, Snapchat, Pinterest, Google+, etc. There are sites I frequent like Stack Overflow, GitHub, Hacker News, figshare, etc., that have changed the way we live and communicate. The internet (web) experience today is no longer a way to publish and consume static digital content; it is a near-synchronous platform which delivers an immersive experience.

References

Bennett, S. (2012, July 13). Marketing 101 – Social Media vs Traditional Media [INFOGRAPHIC]. Retrieved October 04, 2017, from http://www.adweek.com/digital/social-vs-traditional-media-marketing/

Frost, A. (2016, April 03). How and Why to Create a Community With Slack. Retrieved October 04, 2017, from https://blog.bufferapp.com/slack-community

Hurn, C. (2012, May 17). Stuffed Giraffe Shows What Customer Service Is All About. Retrieved October 04, 2017, from http://www.huffingtonpost.com/chris-hurn/stuffed-giraffe-shows-wha_b_1524038.html

Nations, D. (n.d.). Serious Question: What Exactly Is Social Media? Retrieved October 04, 2017, from https://www.lifewire.com/what-is-social-media-explaining-the-big-trend-3486616

Turban, E., Volonino, L., & Wood, G. R. (2015). Information technology for management: Digital strategies for insight, action, and sustainable performance. New Jersey (United States): Wiley.

FIT – MGT 5115 – Wk 4 Discussion Post

Explain how e-business processes improve productivity, efficiency, and competitive advantage for business organizations and the public sector (government and nonprofit organizations).

I think it is important to define the "e" in e-business. The "e" stands for "electronic" and implies that the business is networked (i.e., connected). E-businesses typically make heavy use of technologies such as the internet and electronic data interchange (EDI) to improve productivity, efficiency, and competitive advantage. An e-business applies technology and process to both external and internal business requirements but tends to focus more on internal processes as a means to improve productivity, efficiency, cost structure, and competitive advantage.

E-businesses are connected businesses that leverage technology to operate effectively in a truly global economy.
These technologies might include:

  • Various wired, wireless and mobile networking technologies
  • APIs (application program interfaces) to streamline how applications talk to each other and exchange information
  • Collaboration and communication tools like e-mail, Cisco WebEx, Cisco Spark, Slack, etc., which are critical in a truly global economy
  • BI (business intelligence) tools
  • CRM (customer relationship management)
  • ERP (enterprise resource planning) systems
  • EDI (electronic data interchange)

These technologies allow organizations to better communicate both internally and externally using empirical data that is both accurate and relevant.
EDI extends this communication ecosystem by connecting disparate buyers, suppliers, and partners. In a B2B context, this helps organizations significantly improve operational efficiency via the exchange of data that is critical to all stakeholders. This exchange can facilitate JIT (just-in-time) inventory strategies where buyers understand the inventory and lead time of suppliers, and suppliers understand potential demand. Exchanging information such as this allows both the buyer and the supplier to streamline their operations, making them more efficient, increasing productivity, and ultimately making them more competitive in the market.
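As a concrete illustration of the JIT scenario above, here is a minimal Python sketch of the classic reorder-point calculation; the demand and lead-time figures are made-up examples of the kind of data a supplier might share over EDI.

```python
def reorder_point(daily_demand, supplier_lead_time_days, safety_stock=0):
    """Classic reorder-point formula: demand during lead time plus a safety buffer."""
    return daily_demand * supplier_lead_time_days + safety_stock

# Supplier shares a 5-day lead time via EDI; the buyer sees 40 units/day of demand
# and keeps 60 units of safety stock to absorb variability.
rp = reorder_point(daily_demand=40, supplier_lead_time_days=5, safety_stock=60)
print(rp)  # 260 -> trigger a purchase order when on-hand stock falls to 260 units
```

The point is that the calculation is only as good as the shared data: without the supplier's lead time flowing to the buyer, the buyer is guessing.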

The implementation of e-business practices in the public sector can provide benefits similar to those in the private sector. While the private sector focuses heavily on B2B (business-to-business) as well as B2C (business-to-consumer) applications of technology, it may be willing to abandon a particular segment of the population if that segment does not align with its mission and/or vision. A good example of this is Amazon: if you don't have an internet-connected device, you likely are not viewed by Amazon as part of its target market, and Amazon is willing to forgo you as a customer. (Although the launch of brick-and-mortar Amazon stores, Amazon's acquisition of Whole Foods, and its recent partnership with Kohl's might imply that Amazon wants to capture this segment of the market.) The public sector, by contrast, has a responsibility to ensure that the services it provides are available to everyone. The public sector has more of a B2C and C2C (consumer-to-consumer) focus, and the use of websites, social media, and electronic communications has changed the way citizens access information, the information available to them, and the degree of information transparency. Public sector organizations capture a significant amount of data on citizens or members, and this data can now be better aggregated and analyzed, allowing those organizations to serve their constituents more efficiently and cost-effectively. Public sector organizations have to contend with challenges that the private sector can afford to ignore by simply declaring a group outside its target market.

The lines are very blurry and becoming more blurred with each passing day. Many technical disruptors are building both B2B and B2C platforms; data captured from business partners as well as consumers makes the platforms more robust and meaningful to all stakeholders. I think about companies like Uber and their relationship with drivers and riders. The Uber-to-driver relationship is a B2B relationship, while the relationship between Uber and the rider is a B2C relationship. The same paradigm exists with Amazon and the Amazon marketplace, and there are many more examples. What's clear is that as we become more connected, the gains in productivity, efficiency, new entrants, opportunities, etc., are exponential, not linear.

References

Bartels, A. (2000, October 30). The difference between e-business and e-commerce. Retrieved September 21, 2017, from https://www.computerworld.com/article/2588708/e-commerce/e-commerce-the-difference-between-e-business-and-e-commerce.html

Ripley, H. (2014, January 29). How e-business transforms public sector services in the UK. Retrieved September 21, 2017, from http://www.accaglobal.com/in/en/technical-activities/technical-resources-search/2014/january/How-e-business-transforms-public-sector-services.html

Turban, E., Volonino, L., & Wood, G. R. (2015). Information technology for management: Digital strategies for insight, action, and sustainable performance. New Jersey (United States): Wiley.

FIT – MGT 5115 – Wk 3 Discussion Post

How does data quality impact business performance? Using your textbook as a resource, describe the functions of database technology, the differences between centralized and distributed database architecture, how data quality impacts performance, and the role of a master reference file in creating accurate and consistent data across the enterprise.

When poor data quality, such as missing and/or erroneous data, negatively impacts operations, it can cost organizations business, affecting revenues and profits. Missing and/or erroneous data can affect current revenues, frustrate customers, and place an organization's reputation at risk, putting both existing and future business at stake. Data quality issues can decrease efficiency and increase costs; a lack of confidence in data integrity causes organizations to spend time and money on data validation and error correction activities.

The goal of a database is to store data in a structured way (maybe). Two popular database architectures are SQL and NoSQL. SQL databases, or RDBMSs (Relational Database Management Systems), are relational with a defined schema. NoSQL databases, or document databases, are often schemaless and rely on key-value pairs defined at ingest. As you can imagine, ingesting (or inserting) data into a predefined schema makes managing data integrity easier than defining the key-value pairs at the time of ingest.
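A quick Python sketch makes the contrast concrete, using the standard-library sqlite3 module for the relational side and plain dictionaries standing in for schemaless documents:

```python
import sqlite3

# Relational: the schema is fixed before any row is ingested.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
db.execute("INSERT INTO users (id, name) VALUES (?, ?)", (1, "Rich"))
try:
    # Violates the schema's NOT NULL constraint -> rejected at insert time.
    db.execute("INSERT INTO users (id, name) VALUES (?, ?)", (2, None))
except sqlite3.IntegrityError as e:
    print("relational insert rejected:", e)

# Document-style: key-value pairs are defined at ingest, and nothing is enforced.
documents = []
documents.append({"id": 1, "name": "Rich"})
documents.append({"id": 2, "nmae": "Bob"})  # typo'd key is silently accepted
print(len(documents), "documents stored")
```

The relational engine catches the bad row at the door, while the document store happily accepts a misspelled key; with schemaless ingest, data integrity becomes the application's problem.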

Centralized and distributed database architectures are quite intuitive. Centralized architectures centralize the storage and control of data, while distributed architectures allow data to be stored on edge devices such as laptops, tablets, and mobile devices, or distributed using master/master, master/slave, or parent/child relationships. Centralized architectures offer greater control of data quality because all data is stored in a single physical location; adds, updates, and deletes can be made in a supervised and orderly fashion. Centralized architectures also allow for better security: it is easier to control physical and logical access to a centralized system, and the attack surface is limited when contrasted with a distributed one.

Centralized and distributed database architectures each come with tradeoffs which should be considered when selecting an appropriate architecture. With more and more processing being pushed to the edge (e.g., mobile and IoT growth) and with ever-increasing big data demands, decentralized distributed databases like Apache Cassandra and RethinkDB are experiencing massive growth. Centralized databases like Microsoft SQL Server, MariaDB, and others are still very prominent, but even these centralized players are adapting their architectures to support distribution and capitalize on the big data revolution.

Master reference files provide a common point of reference and act as a single source of truth for a given data entity. Data entities might include customer, product, supplier, employee, or asset data. As a single source of truth, master reference files are used to feed data into enterprise systems and maintain data quality and integrity.
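A minimal sketch of the idea, with a hypothetical customer master and an `enrich_order` helper invented purely for illustration:

```python
# Hypothetical master customer reference: one golden record per customer ID.
MASTER_CUSTOMERS = {
    "C001": {"name": "Acme Corp", "country": "US"},
}

def enrich_order(order, master=MASTER_CUSTOMERS):
    """Resolve an order's customer fields from the master file, not local copies."""
    golden = master[order["customer_id"]]
    return {**order, "customer_name": golden["name"], "country": golden["country"]}

# Billing and shipping systems both enrich from the same source of truth,
# so a correction to the master record propagates everywhere at once.
order = {"order_id": 42, "customer_id": "C001", "amount": 99.5}
print(enrich_order(order)["customer_name"])  # Acme Corp
```

Because every downstream system resolves the customer through the master, a name change made once in the golden record is consistent across the enterprise.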

References

Buckler, Craig. “SQL vs NoSQL: The Differences — SitePoint.” SitePoint, SitePoint, 18 Sept. 2015, www.sitepoint.com/sql-vs-nosql-differences/. Accessed 14 Sept. 2017.

“Do You Know How Data Quality Impacts Your Business?” BackOffice Associates, resources.boaweb.com/backoffice-blog/do-you-know-how-data-quality-impacts-your-business. Accessed 14 Sept. 2017.

Turban, Efraim, et al. Information Technology for Management: Digital Strategies for Insight, Action, and Sustainable Performance. Wiley, 2015.

FIT – MGT 5115 – Wk 2 Discussion Post

Why do managers and workers still struggle to find the information that they need to make decisions or take action despite advances in digital technology? That is, what causes data deficiencies?

Corporate infrastructure and decision support systems have evolved over decades. Over this same period, organizations have endured management changes, shifting priorities, and differing perspectives on the role of IT. Data silos, lost or bypassed data, poorly designed interfaces, nonstandardized data formats, and chronically in-flux requirements further compound the natural system and organizational challenges brought on by progress.

In my opinion, organizations can begin to combat some of the corporate infrastructure and organizational behavior issues by having a clear vision and mission when it comes to information systems. Management changes will happen, but an organization that has a clear vision and mission regarding the value of data and information will stay focused on strategic objectives amidst regime change. Everyone within the organization should be viewed as a stakeholder and a beneficiary. There is an education process that needs to take place; all the parties concerned need to have the right reaction to the blue button moment. Data silos, poor user interface design, etc., persist because the wrong choice is made when a blue button moment occurs. The ability to change the future depends on the decisions we make now.

The wrong blue button moment:
System doesn’t love embedded images so here is a link: https://goo.gl/L1XDLk

Source: Alex Cowan – Getting Started: Agile Meets Design Thinking, University of Virginia

The right blue button moment:
System doesn’t love embedded images so here is a link: https://goo.gl/fFdjnh

Source:  Alex Cowan – Getting Started: Agile Meets Design Thinking, University of Virginia

I believe we are lowering the barrier to entry when it comes to how we transform data into information. For years the industry spent time trying to force data into a common data model for business intelligence (BI); this normalization process usually consisted of one or more ETL (extract, transform, load) jobs. These jobs were typically batched, and the end state was a normalized data set pushed into a relational database management system (RDBMS), where the relational SQL database schema was rigid and comprised of tables consisting of columns, rows, and fields. We called these DSS (decision support system) data warehouses and data marts. Fast forward a few years, and many of these information systems, which leveraged historical data as the primary predictor of the future, are pivoting towards NoSQL databases where key-value pairs have replaced SQL relationships. NoSQL information systems are meant for massive real-time ingest; these systems are being used to build data lakes. The ability to use key-value pairs removes the need for a rigid schema, often eliminating the need for an ETL process. The field of data science, NoSQL platforms like Hadoop, applications like Elasticsearch for indexing and Kibana for visualization, and programming languages like R, Python, Octave, etc., make capturing data and performing analytics easier than ever before. With the advent of public cloud and platforms like AWS EMR, AWS Data Pipeline, Google Cloud Dataflow, and Google Cloud Dataproc, many of the issues like data silos, lost or bypassed data, poorly designed interfaces, and nonstandardized data formats are being addressed. The technology is being adapted to address matters that have persisted for a very long time; these new technologies are streamlining processes, reducing time to value, and solving some of the issues mentioned above.
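To make the batch ETL pattern described above concrete, here is a toy version in Python, using the standard-library sqlite3 module as the stand-in warehouse; the records and field names are invented for illustration.

```python
import sqlite3

# Extract: raw records arrive in inconsistent, source-specific shapes.
raw = [
    {"Name": " Alice ", "REV": "1200"},
    {"name": "bob", "rev": 800},
]

def transform(rec):
    """Normalize keys, whitespace, and types into the warehouse schema."""
    rec = {k.lower(): v for k, v in rec.items()}
    return (rec["name"].strip().title(), float(rec["rev"]))

# Load: push the normalized rows into the rigid relational schema.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE revenue (customer TEXT, amount REAL)")
db.executemany("INSERT INTO revenue VALUES (?, ?)", [transform(r) for r in raw])
print(db.execute("SELECT customer, amount FROM revenue").fetchall())
```

A schemaless ingest would skip the `transform` step entirely and store the raw dictionaries as-is, which is exactly why NoSQL systems can swallow massive real-time streams while deferring the cleanup.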

References

Data Lake vs. Data Warehouse: Is the warehouse going under the lake? (2016, July 22). Retrieved September 06, 2017, from https://www.dezyre.com/article/data-lake-vs-data-warehouse-is-the-warehouse-going-under-the-lake/283

NoSQL vs SQL- 4 Reasons Why NoSQL is better for Big Data applications. (2015, March 19). Retrieved September 06, 2017, from https://www.dezyre.com/article/nosql-vs-sql-4-reasons-why-nosql-is-better-for-big-data-applications/86

Turban, E., Volonino, L., & Wood, G. R. (2015). Information technology for management digital strategies for insight, action, and sustainable performance. New Jersey (Estados Unidos): Wiley.

FIT – MGT 5115 – Wk 1 Assignment

Explain how IT (Information Technology) impacts your career and the positive outlook for IS (Information Systems) management careers.

[google-drive-embed url=”https://docs.google.com/presentation/d/1I-oDP-ci5C4gNkcG4OftGp21OpLNmjxBAAPlmW52eRg/preview?usp=drivesdk” title=”FIT MGT 5115 Week 1 Presentetation” icon=”https://drive-thirdparty.googleusercontent.com/16/type/application/vnd.google-apps.presentation” width=”100%” height=”400″ style=”embed”]

FIT MGT5114 – Wk1 Discussion 1 Peer Response

I enjoyed reading your post. Long, complex passwords have become an essential security measure. I am an aspiring ethical hacker, and one of my hobbies is cracking hashed passwords. Ten years ago cracking a nine-character upper- and lowercase alphanumeric password would have been highly improbable. Today you can grab an AWS p2.16xlarge instance for about fourteen dollars an hour on demand, and if you're frugal and looking to crack passwords at scale, you could use spot instances and lower the cost for a p2.16xlarge to less than seven dollars an hour. The use of GPUs has lowered the time to crack passwords from years to days and from days to minutes and seconds. Most people know that using a long password which contains upper- and lowercase letters, numbers, and special characters is a good idea. It's also a good idea to avoid simple leet passwords like "H0use" because these sorts of passwords provide little in the way of extra security. A little-known fact is that using a ":" in your password makes it significantly harder to crack; the reason is that password cracking tools like hashcat use the colon as a delimiter (linked to the Unix /etc/passwd file's use of the colon to delimit fields) for the split function, so a colon confuses the password cracker. Unfortunately, the colon is a common delimiter, and not all systems will allow its use.
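The back-of-the-envelope math behind those GPU claims is easy to sketch. The hash rates below are illustrative assumptions (real rates depend heavily on the hash algorithm and hardware), not benchmarks:

```python
def crack_time_seconds(charset_size, length, hashes_per_second):
    """Worst-case brute-force time: total keyspace divided by guess rate."""
    return charset_size ** length / hashes_per_second

# 9-character upper+lower alphanumeric password: 26 + 26 + 10 = 62 symbols
# Assumed rates: a CPU rig at ~10 million guesses/sec vs a GPU farm at ~100 billion/sec
cpu_days = crack_time_seconds(62, 9, 10_000_000) / 86400
gpu_minutes = crack_time_seconds(62, 9, 100_000_000_000) / 60
print(f"CPU: ~{cpu_days:,.0f} days, GPU farm: ~{gpu_minutes:,.0f} minutes")
```

Under these assumptions the same keyspace drops from decades on a CPU to hours on a GPU farm, which is exactly why rentable GPU instances changed the economics of cracking.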

References

Amazon EC2 – P2 Instances. (n.d.). Retrieved March 12, 2017, from https://aws.amazon.com/ec2/instance-types/p2/

Goodin, D. (2013, May 27). Anatomy of a hack: How crackers ransack passwords like “qeadzcwrsfxv1331”. Retrieved March 12, 2017, from https://arstechnica.com/security/2013/05/how-crackers-make-minced-meat-out-of-your-passwords/2/

Gite, V. (2015, August 03). Understanding /etc/passwd File Format. Retrieved March 12, 2017, from https://www.cyberciti.biz/faq/understanding-etcpasswd-file-format/

GPU Password Cracking – Bruteforceing a Windows Password Using a Graphic Card. (2011, July 12). Retrieved March 12, 2017, from https://mytechencounters.wordpress.com/2011/04/03/gpu-password-cracking-crack-a-windows-password-using-a-graphic-card/

Hashcat advanced password recovery. (n.d.). Retrieved March 12, 2017, from https://hashcat.net/hashcat/

Mathiopoulos, I. (2016, October 05). Running hashcat in Amazon’s AWS new 16 GPU p2.16xlarge instance. Retrieved March 12, 2017, from https://medium.com/@iraklis/running-hashcat-in-amazons-aws-new-16-gpu-p2-16xlarge-instance-9963f607164c#.kcszxs1s5

Pfleeger, C. P., Pfleeger, S. L., & Margulies, J. (2015). Security in computing (5th ed.). Upper Saddle River: Prentice Hall.

Project 12: Cracking Linux Password Hashes with Hashcat (15 pts.). (n.d.). Retrieved March 12, 2017, from https://samsclass.info/123/proj10/p12-hashcat.htm

Spot Bid Advisor. (n.d.). Retrieved March 12, 2017, from https://aws.amazon.com/ec2/spot/bid-advisor/