Richard J. Bocchinfuso

"Be yourself; everyone else is already taken." – Oscar Wilde

FIT – MGT5114 – Week8 – Discussion 10

Discuss how having your personal information in online databases may lead to identity theft. How can you protect yourself from this?

Our personal data litters the Internet, and as the digital world provides more convenience, our digital footprint and the attack surface continue to grow. Most of us continue to trade convenience for security, so there is no end in sight. Reading this question, I could not help but think about the Wired editor who had his digital identity wiped out (Honan, 2012); if you haven't read this story, I highly recommend it. The ease with which a hacker can gain access to a single piece of information and use it as the catalyst to take over a person's life is astounding. Vulnerabilities come in all forms, but what is interesting about the Wired editor story is that the vulnerability was in the process and was exploited via social engineering. The growth of identity theft insurance demonstrates how real identity theft is.

One protection approach is to limit what we store online; for instance, when that little check-box pops up asking to save your credit card information, don't click it. More and more organizations are encrypting or hashing the personal/private data they store in online databases, but we still have to be careful. I used to use an expiring credit card designed for online purchasing; the system would generate a temporary credit card number with a credit limit equal to what I was going to purchase. This was a good system, but it became cumbersome, so I traded security for convenience. I also try not to reuse passwords, because if one online database is compromised, I don't want to hand whoever has the data the keys to my kingdom by using the same password everywhere. Finally, it's important to recognize how much password length and complexity matter; tools like hashcat combined with cheap cloud computing have made cracking simple passwords a trivial and speedy task, and what used to take years now takes minutes.
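To put the "years now takes minutes" point in rough numbers, here is a minimal Python sketch; the guess rate is an assumption on the order of what a GPU cracking rig can try against a fast, unsalted hash, not a measured hashcat benchmark, and slow hashes like bcrypt change the math dramatically:

# Back-of-the-envelope brute-force estimate vs. password length.
# ASSUMPTION: ~10 billion guesses per second against a fast, unsalted hash.
GUESSES_PER_SECOND = 10 ** 10

def worst_case_days(charset_size, length):
    """Worst-case days to exhaust the full keyspace."""
    keyspace = charset_size ** length
    return keyspace / GUESSES_PER_SECOND / 86400

for length in (8, 10, 12, 16):
    print("%2d chars: lowercase only ~%.6f days, printable ASCII ~%.2f days"
          % (length, worst_case_days(26, length), worst_case_days(95, length)))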

References

Honan, M. (2012, August 06). How Apple and Amazon Security Flaws Led to My Epic Hacking. Retrieved April 26, 2017, from https://www.wired.com/2012/08/apple-amazon-mat-honan-hacking/

Pascal, A. (2014, February 27). Online Identity Theft Statistics – And How to Protect Yourself. Retrieved April 26, 2017, from http://eggtoapples.com/blog/online-identity-theft-statistics-and-how-to-protect-yourself/

FIT – MGT5114 – Week8 – Discussion 9

Law and ethics are often both considerations when determining the reaction to a computer security incident. For instance, scanning for open wireless networks is not illegal unless the scanner connects to the network without permission. Discuss this issue in terms of the legal and ethical issues that surround using a wireless connection that you do not own.

Wireless network scanning is not unlawful, and the ethics should be judged by the intent of the individual doing the scanning. For instance, if a person is scanning the network to identify unsecured wireless access points, or access points with weak encryption protocols like WEP, so that they can conduct an attack to gain unauthorized access, it would clearly be unethical. Tools like aircrack-ng allow hackers to identify access points, obfuscate themselves, promiscuously collect packets, and crack WEP keys and WPA passwords.

There is an argument that piggybacking on open wifi access points (APs) is not unethical. Those who argue this perspective state that some APs are intentionally left open, so labeling wifi piggybacking on any open AP as unethical would be an incorrect assessment. Those who argue the unethical position state that it is unethical to cheat the ISP out of its revenue. There is the case of the man who was charged with a crime for using a cafe's wifi by sitting outside and piggybacking from his car; this was deemed unlawful because the free wifi was intended for patrons, which he was not. I think the reality is that it's hard to know whether an open AP was left open intentionally or unintentionally; services like WiGLE provide data regarding "free" wifi access points. With regard to the argument that it's stealing from the ISP and thus unethical, I would need to look at the ISP's terms of service. I know of many coffee shops that have residential-class Internet service and provide "free" wifi to their patrons, so it would seem that sharing your ISP connection via wifi is not illegal or unethical. We live in a connected world, and I think jumping on an open wifi AP has become a way of life, so moral intent is important when deciding if this behavior is ethical or unethical. No doubt this is a topic which is open to debate.

References

Cheng, J. (2007, May 22). Michigan man arrested for using cafe's free WiFi from his car. Retrieved April 26, 2017, from https://arstechnica.com/tech-policy/2007/05/michigan-man-arrested-for-using-cafes-free-wifi-from-his-car/

Bangeman, E. (2008, January 03). The ethics of "stealing" a WiFi connection. Retrieved April 26, 2017, from https://arstechnica.com/security/2008/01/the-ethics-of-stealing-a-wifi-connection/

Pash, A. (2008, January 04). The Ethics of Wi-Fi “Stealing”. Retrieved April 26, 2017, from http://lifehacker.com/340716/the-ethics-of-wi-fi-stealing

Writer, L. G. (2011, September 10). Is it Legal to Piggyback WiFi? Retrieved April 26, 2017, from http://smallbusiness.chron.com/legal-piggyback-wifi-28287.html

FIT MGT5114 – Wk7 Discussion 1 Post

Security and risk are clearly related; the more at-risk a system or data set is the more security is desirable to protect it. Discuss how prices for security products may be tied to the degree of risk. That is, will people or organizations be willing to pay more if the risk is higher?

Absolutely, maybe, hmmm, what a complex world we live in. There is seemingly a direct correlation between the value of assets, reputation, etc., the risk associated with a potential vulnerability or a successful exploit, and what an organization is willing to pay to protect itself. Some market segments make the decision to spend on security products clearer by imposing regulatory requirements that make the cost of non-compliance steep enough to mandate compliance.

For example:
  • Processing credit card transactions? You are subject to PCI DSS.
  • Do something regulated by the FDA? You are subject to Title 21 of the Code of Federal Regulations (21 CFR Part 11), Electronic Records.
  • Do pretty much anything in health care? You are probably subject to the Health Insurance Portability and Accountability Act (HIPAA) and the Health Information Technology for Economic and Clinical Health (HITECH) Act, which means you had better keep that patient data secure.

These regulations and others make the decision to invest in security products seemingly straightforward, but not everything is what it seems. A major breach like Target's, in which 40 million credit and debit card records and 70 million customer records (including addresses and phone numbers) were lifted from their systems, netted a loss of only 0.1% of their 2014 sales. The same is true of Home Depot, which in 2014 had 56 million credit and debit card numbers and 53 million email addresses lifted from their systems, netting a loss of only 0.01% of their 2014 sales. These and many other firms have cyber liability insurance to mitigate their losses; between insurance payments and tax write-offs the losses diminish, and so does the incentive to invest in security products.

When we look at sites like http://map.norsecorp.com/#/ that depict the velocity of attacks, and we think about the attack surface of an online entity, the phrase "if there is a breach" probably should be replaced with "when there is a breach". I would say there is some hedging occurring in the enterprise, where organizations balance security investments against projected losses from a breach. No investment makes you hack-proof, and if and when you are hacked, having invested millions in technology to protect against the hack garners no reputation points; being smart about your security posture, not over-investing, and rolling the dice (it's happening regardless) may be a prudent business decision. Stuxnet proved that even a facility which is off the grid is vulnerable to attack.
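To make the hedging concrete, here is a back-of-the-envelope Python sketch using the classic annualized loss expectancy formula (ALE = single loss expectancy x annualized rate of occurrence); every figure below is hypothetical and is not drawn from the Target or Home Depot incidents:

# Hypothetical risk math: does a security investment pay for itself on paper?
def ale(single_loss_expectancy, annualized_rate_of_occurrence):
    """Annualized loss expectancy: expected breach cost per year."""
    return single_loss_expectancy * annualized_rate_of_occurrence

sle = 5000000                    # hypothetical cost of one breach, net of insurance and write-offs
aro = 0.2                        # hypothetical: one breach expected every five years
annual_security_spend = 1500000  # hypothetical cost of additional security products and staff
risk_reduction = 0.5             # hypothetical: the new controls halve expected losses

baseline = ale(sle, aro)
residual = baseline * (1 - risk_reduction)
print("ALE without new controls: $%d" % baseline)
print("ALE with new controls:    $%d" % residual)
print("Net annual benefit:       $%d" % (baseline - residual - annual_security_spend))
# A negative net benefit is exactly the "roll the dice" scenario described above.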

References

Data Breach & Cyber Liability Insurance. (n.d.). Retrieved April 19, 2017, from https://www.thehartford.com/data-breach-insurance

Kassner, M. (2015, April 9). Data breaches may cost less than the security to prevent them. Retrieved April 19, 2017, from http://www.techrepublic.com/article/data-breaches-may-cost-less-than-the-security-to-prevent-them/

Staff, C. (2012, December 19). The security laws, regulations and guidelines directory. Retrieved April 19, 2017, from http://www.csoonline.com/article/2126072/compliance/compliance-the-security-laws-regulations-and-guidelines-directory.html#Electronic-Fund-Transfer

Zetter, K. (2014, November 03). An Unprecedented Look at Stuxnet, the World’s First Digital Weapon. Retrieved April 19, 2017, from https://www.wired.com/2014/11/countdown-to-zero-day-stuxnet/

FIT MGT5114 – Wk6 Discussion 1 Post

Discuss three possible inclusions in a security policy. How do they differ from those included in a business continuity plan?

“A security policy documents an organization’s security needs and priorities.” (Pfleeger, Pfleeger & Margulies, 2015, p. 671) “A security policy is a high-level statement of purpose.” (Pfleeger, Pfleeger & Margulies, 2015, p. 671) A security policy does not merely address a security posture from a technical perspective, such as identifying known vulnerabilities. A security policy is nuanced, having to take into consideration the assets which need to be protected, the value of these assets, potential regulatory concerns, etc… A security policy should consider the following:

  • Organizational goals.
  • Delegation of responsibility.
  • Organizational commitment.

While a security policy is a macro level statement of purpose, a security plan includes the security policy, but also includes details such as current state (the current security posture including gaps, likely the result of an assessment), requirements, recommendations, accountability (possibly in the form of a RACI matrix), timetable (project plan) and a maintenance plan focused on operational upkeep.

A “Business continuity plan documents how a business will continue to function during or after a computer security incident.” (Pfleeger, Pfleeger & Margulies, 2015, p. 681). “An ordinary security plan covers computer security during normal times (under normal operations) and deals with protecting against a wide range of vulnerabilities from usual sources.” (Pfleeger, Pfleeger & Margulies, 2015, p. 681). The text simply states that the difference between a security plan and a business continuity plan is that one is focused on establishing security guidelines that will be used during normal operations while the other is invoked by either a catastrophic failure or a prolonged outage which will negatively impact the business.

I would say that a security policy is part of a business continuity plan (BCP); in other words, security policies exist inside the BCP. When a BCP is invoked due to a catastrophic event or prolonged outage, the goal is to have a playbook for returning to normal operations under the worst of conditions, at which time security policies are reinstituted as part of the BCP. A security policy may also govern the execution of a business continuity or disaster recovery plan.

A final thought, this week’s discussion question seems to ask for a “security policy” to be contrasted with a “business continuity plan,” not a “security plan” with a “business continuity plan.” I hedged a bit with my response. 🙂

References

Pfleeger, C. P., Pfleeger, S. L., & Margulies, J. (2015). Security in computing (5th ed.). Upper Saddle River: Prentice Hall.

FIT MGT5114 – Wk5 Discussion 1 Post

Question:

Telecommunication network providers and users are concerned about the single point of failure in the “last mile”, which is the single cable from the network provider’s switching station to the customer’s premises. How can a customer protect against that single point of failure? Provide an analysis on whether this presents a good cost-benefit trade-off.

Response:

The obvious answer here is to have redundant providers, but redundant links alone do not provide redundancy. To truly be redundant, the solution must incorporate transparent failover. This is no different than a blown electrical circuit in your home: if the freezer is connected to a circuit that blows, the fact that an adjacent outlet is available to power the freezer is meaningless if you're sleeping or on vacation. For a system to have no single point of failure, redundant infrastructure (the easy part) must exist, but these systems also need to be self-healing. This concept has prompted the emergence of a field called site reliability engineering, which focuses on the self-healing aspects of information systems at scale. Consumers or SMBs looking to protect themselves from "last mile" failures via infrastructure redundancy might use a dial-up connection, but probably not, because who still has a POTS line? The more likely option is a router which can handle both wireline broadband and wireless broadband connections. Devices like the Failsafe Gigabit N Router for Mobile Broadband from Cradlepoint provide a cost-effective way to achieve transparent circuit failover. Because most ingress and egress traffic is NAT'd on a consumer-grade network (e.g., your home network), a move from one provider to another can be performed quickly and nondisruptively. NAT'd traffic moves between your LAN and the Internet using a single public IP address (typically a DHCP address assigned by your provider), which makes it reasonable to use this approach for redundancy.

My home network is fairly complex (some pics from my home lab), with two circuits and multiple site-to-site VPNs to cloud providers. Both my wireline circuits and my wireless broadband circuit are Verizon circuits, with one wireline circuit being business grade and one being consumer grade; I leverage wireless broadband as my tertiary Internet connection (I used it for two weeks following Hurricane Sandy). The business circuit differs in speed from my consumer circuit, and the business circuit provides me with public-facing IP space and the ability to use my own router vs. the Verizon FiOS-provided router; these are key differentiators between consumer circuits and business circuits. I use pfSense as my router and firewall of choice, and pfSense manages all my routing and circuit failover.

Because this is my home lab, I do not use something like BGP to manage external traffic and allow for transparent failover. Instead, I monitor my home lab circuits using a witness process which runs a check against two IP addresses. For simplicity, IP address 1 is the advertised static public-facing IP address on my Verizon business circuit, and IP 2 is the NAT'd, port-forwarded address on my consumer-grade FiOS circuit. IP 1 maps to host.domainname and IP 2 resolves to host.dyndns, where dyndns is Namecheap's dynamic DNS service. When all is well the host is directly accessible via IP 1; if something goes wrong, the host becomes available using IP 2. Obviously, the use of BGP and an AS number to facilitate failover for my home lab would be a bit costly, so the witness process watches for service availability on IP 1 or IP 2 and updates the DNS A record of the service with my domain registrar if the service becomes reachable on an alternate path. My DNS provider is Namecheap, so the witness server tests the service for accessibility and then uses PyNamecheap to update the A record programmatically. With a short TTL, the DNS records propagate, and public services are again available; not all services will fail over, but web services are available with a little help from NGINX and reverse proxying.
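Below is a minimal Python sketch of the witness logic described above; the IP addresses, hostname, and port are placeholders, and the update_a_record function is a stub standing in for the registrar API call (in my case that call is made via PyNamecheap against Namecheap):

import socket

# Placeholder values; in practice these come from configuration.
PRIMARY_IP = "203.0.113.10"     # static public IP on the business circuit
SECONDARY_IP = "198.51.100.20"  # NAT'd, port-forwarded IP on the consumer circuit
SERVICE_PORT = 443              # port the published service listens on
HOSTNAME = "host.example.com"   # the A record the witness manages

def reachable(ip, port, timeout=5):
    """Return True if a TCP connection to ip:port succeeds within timeout."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

def update_a_record(hostname, ip):
    """Stub: push the new A record to the DNS provider.
    In practice this would call the registrar's API (e.g., via PyNamecheap)
    with a short TTL so the change propagates quickly."""
    print("would update %s -> %s" % (hostname, ip))

def witness():
    if reachable(PRIMARY_IP, SERVICE_PORT):
        update_a_record(HOSTNAME, PRIMARY_IP)    # normal path
    elif reachable(SECONDARY_IP, SERVICE_PORT):
        update_a_record(HOSTNAME, SECONDARY_IP)  # fail over to the consumer circuit
    else:
        print("service unreachable on both paths; nothing to update")

if __name__ == "__main__":
    witness()  # in practice this runs on a schedule (e.g., cron every minute)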

The above is not very expensive from a pure infrastructure perspective. The consulting may be a bit costly if you are not capable of configuring it yourself, but the cost to build in redundancy is getting lower and lower. Cloud providers like AWS, with services like Route 53, S3 and Lambda, make it very cost-effective to leverage their site reliability engineering to build disaster-tolerant systems without ever worrying about the physical infrastructure. Whether the time, energy and money are worth it, and whether there is an ROI, depends on what you are looking to accomplish and the value of the services you are providing. I require public IP address space, which is not offered on Verizon FiOS consumer-grade circuits; I also need a consumer-grade Verizon FiOS line for TV, Internet, and telephone service. For these reasons, it made sense for me to leverage the consumer-grade line as a backup to provide access to critical systems and services in the event of something like a physical fiber cut, which has happened, with the landscaper putting a shovel through the fiber (there are two fiber runs from the street to my house).

References

Pfleeger, C. P., Pfleeger, S. L., & Margulies, J. (2015). Security in computing (5th ed.). Upper Saddle River: Prentice Hall.

FIT MGT5114 – Wk4 Discussion 1 Peer Response

Good post, and you are certainly in the majority with your perspective regarding the existence of duplicate records in a database and the negative impact on DB integrity. My only issue with this question and the responses is that duplicate "DB" records are primarily explored in the context of an RDBMS. Professor Karadsheh mentions Big Data in a few response posts; Big Data and the emergence of NoSQL and document databases have challenged some of the concepts firmly rooted in legacy RDBMS best practices, where relationships and table joins are foundational and duplicate data typically presents a significant problem. At a high level, SQL databases rely on structured data: tables with fields, normalized data inserted into these fields, relationships between tables, and SQL statements to return results. It's easy to see the pitfalls of duplication in the context of an RDBMS. NoSQL or document databases use a key-value store paradigm, where keys and values are defined when unstructured, denormalized data is ingested. A good example of this is opening a stream from the Twitter API for something like sentiment analysis. I use this as an example because I am a heavy user of ElasticSearch (a NoSQL DB) for log and sentiment analysis. The benefit of NoSQL is the ability to ingest thousands of unstructured, denormalized records per second; these records use key-value pairs to map keys to data (values).

Here is an example use of ElasticSearch: a data stream is opened using the Twitter API, the stream is pushed into ElasticSearch, and then Kibana is used to visualize sentiment. In this case, duplicate records don't indicate that the integrity of the database is suspect, time series don't matter, etc. What is important is the ability to stream thousands of messages per second, use an NLP library to determine sentiment, create a JSON record containing key-value pairs, and add it to ElasticSearch.

ElasticSearch records look like this:  http://www.awesomescreenshot.com/image/2357496/22cb647c962eb32ee38e8ad8ee3c13d5
POTUS Sentiment Analysis using ElasticSearch and Kibana:  http://gotitsolutions.org/2017/02/24/potus-sentiment-analysis/
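As a simplified illustration of why duplicates are a non-issue in this model, here is a hedged Python sketch of indexing a sentiment-scored record into ElasticSearch; the index and type names are placeholders, the sentiment score stands in for real NLP output, and it assumes the elasticsearch-py client of that era (index/doc_type/body signature):

from datetime import datetime
from elasticsearch import Elasticsearch  # assumes the elasticsearch-py client

es = Elasticsearch()  # defaults to localhost:9200

def index_tweet(text, sentiment_score):
    """Index one denormalized key-value document; duplicate content is fine."""
    doc = {
        "text": text,
        "sentiment": sentiment_score,  # in practice, output of an NLP library
        "timestamp": datetime.utcnow().isoformat(),
    }
    # ElasticSearch assigns a unique _id to each document, so two identical
    # tweets simply become two documents; nothing joins on their content.
    return es.index(index="tweets", doc_type="tweet", body=doc)

index_tweet("I love this!", 0.9)
index_tweet("I love this!", 0.9)  # duplicate content, no integrity problem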

Like so many things, I think the answer to this question, in a context which defines DB as more than just RDBMS, is: it depends. With that said, I do agree that duplication in the context of a traditional RDBMS can wreak havoc on data integrity.

References

Bocchinfuso, R. J. (2017, March 31). POTUS Sentiment Analysis. Retrieved April 02, 2017, from http://gotitsolutions.org/2017/02/24/potus-sentiment-analysis/

Issac, L. P. (2014, January 14). SQL vs NoSQL Database Differences Explained with few Example DB. Retrieved April 02, 2017, from http://www.thegeekstuff.com/2014/01/sql-vs-nosql-db/?utm_source=tuicool

Pfleeger, C. P., Pfleeger, S. L., & Margulies, J. (2015). Security in computing (5th ed.). Upper Saddle River: Prentice Hall.

FIT MGT5114 – Wk4 Discussion 1 Post

Question:

Can a database contain two identical records without a negative effect on the integrity of the database? Why or why not?

Response:

I think this can be a complex question, and it needs to be qualified a bit. Not all databases are relational; a database could comprise a single table with a single field and multiple rows which contain binary responses, something like "agree or disagree" responses to a question for something like sentiment analysis. An example here would be a question posed on a website where users are asked to "agree or disagree". The user's response is stored in a database table, and a query is used to count the "agree" and "disagree" responses.

An example of this is represented by the following SQL statements:

sqlite> -- create table
sqlite> create table sentiment (answer text);
sqlite>
sqlite> -- insert data into table
sqlite> insert into sentiment (answer) values ('agree');
sqlite> insert into sentiment (answer) values ('agree');
sqlite> insert into sentiment (answer) values ('agree');
sqlite> insert into sentiment (answer) values ('agree');
sqlite> insert into sentiment (answer) values ('agree');
sqlite> insert into sentiment (answer) values ('agree');
sqlite> insert into sentiment (answer) values ('disagree');
sqlite> insert into sentiment (answer) values ('disagree');
sqlite> insert into sentiment (answer) values ('disagree');
sqlite>
sqlite> -- query all records
sqlite> select * from sentiment;
agree
agree
agree
agree
agree
agree
disagree
disagree
disagree
sqlite>
sqlite> -- query total number of responses
sqlite> select count(answer) from sentiment;
9
sqlite> -- query total number of agree responses
sqlite> select count(answer) from sentiment where answer = ('agree');
6
sqlite> -- query total number of disagree responses
sqlite> select count(answer) from sentiment where answer = ('disagree');
3

Above, a DB table called "sentiment" is created with one field, "answer". The inserts represent data being inserted into the table "sentiment" and field "answer" to create records. The data is then used to calculate the total number of respondents, the number of respondents who agree, and the number of respondents who disagree.

This is a simple example of a DB which stores data that can be mined to gauge sentiment regarding the question posed to the user. In the example above there were nine total respondents, six who agree and three who disagree. In this case duplicate records are acceptable and expected, with the goal of recording all responses and tabulating a count of each response type; database integrity is not affected negatively.

When the question is asked in the context of a relational database (RDBMS) the assumption that we can make is that there are relationships which are created between tables and these relationships likely rely on a unique identifier (a primary key) to ensure that records can be uniquely identified. The confusion that can be created when duplicate records exist can be demonstrated by a SQL update.

sqlite> -- example of poorly designed db table with duplicate records
sqlite>
sqlite> -- create table foo
sqlite> create table foo (name text, age integer);
sqlite>
sqlite> -- insert data into table
sqlite> insert into foo (name,age) values ('John',10);
sqlite> insert into foo (name,age) values ('Joe',20);
sqlite> insert into foo (name,age) values ('Jane',30);
sqlite>
sqlite> -- query db table
sqlite> select * from foo;
John|10
Joe|20
Jane|30
sqlite> select * from foo where (name) = ('John');
John|10
sqlite>
sqlite> -- create duplicate records
sqlite> insert into foo (name,age) values ('John',10);
sqlite> insert into foo (name,age) values ('Joe',20);
sqlite>
sqlite> -- query db table
sqlite> select * from foo;
John|10
Joe|20
Jane|30
John|10
Joe|20
sqlite> select * from foo where (name) = ('John');
John|10
John|10
sqlite>
sqlite> -- create new record
sqlite> insert into foo (name,age) values ('Bob',30);
sqlite>
sqlite> -- query db table
sqlite> select * from foo;
John|10
Joe|20
Jane|30
John|10
Joe|20
Bob|30
sqlite> select * from foo where (age) = (30);
Jane|30
Bob|30
sqlite>
sqlite> -- update John's age to 40
sqlite> update foo set (age) = (40) where (name) = ('John');
sqlite>
sqlite> -- query db table
sqlite> select * from foo;
John|40
Joe|20
Jane|30
John|40
Joe|20
Bob|30
sqlite> select * from foo where (name) = ('John');
John|40
John|40
sqlite>

Above we can see that a table called "foo", consisting of two fields, "name" and "age", is created; names and ages are added to it, with the records "John|10" and "Joe|20" being duplicated. An update is made to the database to change John's age from 10 to 40. This update impacts all of John's records because there is no unique identifier in the record which can be used to target a single row. While the previous example, where I stored information for sentiment analysis, showed that it is possible to have a database where integrity is not impacted by duplicate records, in general this is a poor design choice and can be easily fixed with the addition of a primary key.

Below you will see the subtle but powerful difference that a primary key offers.

sqlite> -- properly designed db table avoids duplicate records
sqlite>
sqlite> -- create table foo with autoincrementing primary key
sqlite> create table foo (id integer primary key autoincrement, name text, age integer);
sqlite>
sqlite> -- insert data into table
sqlite> insert into foo (name,age) values ('John',10);
sqlite> insert into foo (name,age) values ('Joe',20);
sqlite> insert into foo (name,age) values ('Jane',30);
sqlite>
sqlite> -- query db table
sqlite> select * from foo;
1|John|10
2|Joe|20
3|Jane|30
sqlite> select * from foo where (name) = ('John');
1|John|10
sqlite>
sqlite> -- create duplicate records
sqlite> insert into foo (name,age) values ('John',10);
sqlite> insert into foo (name,age) values ('Joe',20);
sqlite>
sqlite> -- query db table
sqlite> select * from foo;
1|John|10
2|Joe|20
3|Jane|30
4|John|10
5|Joe|20
sqlite> select * from foo where (name) = ('John');
1|John|10
4|John|10
sqlite>
sqlite> -- create new record
sqlite> insert into foo (name,age) values ('Bob',30);
sqlite>
sqlite> -- query db table
sqlite> select * from foo;
1|John|10
2|Joe|20
3|Jane|30
4|John|10
5|Joe|20
6|Bob|30
sqlite> select * from foo where (age) = (30);
3|Jane|30
6|Bob|30
sqlite>
sqlite> -- update John's age to 40 where id = N
sqlite> update foo set (age) = (40) where (id) = (4);
sqlite>
sqlite> -- query db table
sqlite> select * from foo;
1|John|10
2|Joe|20
3|Jane|30
4|John|40
5|Joe|20
6|Bob|30
sqlite> select * from foo where (name) = ('John');
1|John|10
4|John|40
sqlite>

In the above example an additional field, "id", is added as a primary key; this is not a user-entered field but an auto-generated field that guarantees each record is unique and uniquely identifiable. This subtle design change allows the manipulation of just the John record with id = 4. While the sentiment example I gave works as is and poses no threat to data integrity, the addition of a unique id as a primary key would be a welcome and desirable design change.

My apologies for all the SQL, but I thought the best way to convey my thoughts would be to use examples. I think the easy answer here would have been just to say NO, a database can NOT contain two identical records without a negative effect on the integrity of the database, but I think the answer is "it depends". With that said, I think it is a best practice to have a way to uniquely identify database records, because the inability to manipulate data programmatically can create some serious issues. Additionally, schema extensions or changes are very difficult when a primary key does not exist. Finally, the inability to uniquely identify records or elements greatly impacts the ability to apply security paradigms.

References

Pfleeger, C. P., Pfleeger, S. L., & Margulies, J. (2015). Security in computing (5th ed.). Upper Saddle River: Prentice Hall.

SQLite Home Page. (n.d.). Retrieved March 28, 2017, from https://www.sqlite.org/

SQL code used in above examples:

-- example of poorly designed db table with duplicate records

-- create table foo
create table foo (name text, age integer);

-- insert data into table
insert into foo (name,age) values ('John',10);
insert into foo (name,age) values ('Joe',20);
insert into foo (name,age) values ('Jane',30);

-- query db table
select * from foo;
select * from foo where (name) = ('John');

-- create duplicate records
insert into foo (name,age) values ('John',10);
insert into foo (name,age) values ('Joe',20);

-- query db table
select * from foo;
select * from foo where (name) = ('John');

-- create new record
insert into foo (name,age) values ('Bob',30);

-- query db table
select * from foo;
select * from foo where (age) = (30);

-- update John's age to 40
update foo set (age) = (40) where (name) = ('John');

-- query db table
select * from foo;
select * from foo where (name) = ('John');

-- properly designed db table avoids duplicate records

-- create table foo with autoincrementing primary key
create table foo (id integer primary key autoincrement, name text, age integer);

-- insert data into table
insert into foo (name,age) values ('John',10);
insert into foo (name,age) values ('Joe',20);
insert into foo (name,age) values ('Jane',30);

-- query db table
select * from foo;
select * from foo where (name) = ('John');

-- create duplicate records
insert into foo (name,age) values ('John',10);
insert into foo (name,age) values ('Joe',20);

-- query db table
select * from foo;
select * from foo where (name) = ('John');

-- create new record
insert into foo (name,age) values ('Bob',30);

-- query db table
select * from foo;
select * from foo where (age) = (30);

-- update John's age to 40 where id = N
update foo set (age) = (40) where (id) = (4);

-- query db table
select * from foo;
select * from foo where (name) = ('John');

-- sentiment analysis

-- create table
create table sentiment (answer text);

-- insert data into table
insert into sentiment (answer) values ('agree');
insert into sentiment (answer) values ('agree');
insert into sentiment (answer) values ('agree');
insert into sentiment (answer) values ('agree');
insert into sentiment (answer) values ('agree');
insert into sentiment (answer) values ('agree');
insert into sentiment (answer) values ('disagree');
insert into sentiment (answer) values ('disagree');
insert into sentiment (answer) values ('disagree');

-- query all records
select * from sentiment;

-- query total number of responses
select count(answer) from sentiment;
-- query total number of agree responses
select count(answer) from sentiment where (answer) = ('agree');
-- query total number of disagree responses
select count(answer) from sentiment where (answer) = ('disagree');

FIT MGT5114 – Wk3 Discussion 1 Peer Response

I enjoyed reading your post, and I appreciate your comments on my post. Sometimes it's easy to forget the tools I (we) use every day to protect information because we don't trust broader access controls. I have been using tools like AxCrypt and VeraCrypt (previously TrueCrypt) for years to protect personal data, similar to Microsoft BitLocker. My company used full disk encryption for a while, which required you to enter a password before booting your laptop; the idea was that all data on the hard drive was encrypted, so if the laptop was lost or stolen someone could not pull the drive, connect it to another machine, and start perusing data. I hated the laptop encryption; it was a good concept, but the software-based encryption slowed down the computer tremendously. Software-based full volume encryption on a laptop just crushed I/O performance, making it impractical.

I think you bring up an excellent point regarding things like public drives and even network shares or other network-based technologies where we assume our data is secure, confidential and guaranteed authentic, but in practice this is a bigger challenge than many realize. I work with organizations of varied sizes, from the Fortune Ten to SMBs, and I have always been amazed by the power of the IT guy/gal and how the desire for simplicity often gives way to massive security issues. Group shares like HR, legal, etc., and user shares in departments that should be highly confidential, with root or administrative privileges removed, are so often fully accessible by IT administrative users. It's understandable why, but no less concerning. The removal of root or administrative privileges greatly complicates tasks like backups and migrations, tasks that IT organizations (the IT guys/gals) perform all the time, and this often leads to practices which create security holes. Granular, user-controllable permissions orchestrated from an API and a move toward guaranteed authenticity became popular with content-addressable storage (CAS), and today the properties of CAS are part of object-based storage systems like Amazon (AWS) S3.

Let’s look at the following example:

The original file, iou.txt says the following: “John Doe owes Jane Smith $1,000.00”
Below you can see I create the file (set-content) with the contents above, output the contents of the file (get-content), display the file attributes (get-itemproperty), and then hash the file (get-filehash). The file hash is very important.

PS D:\Downloads\week3> Set-Content .\iou.txt 'John Doe owes Jane Smith $1,000.00'
PS D:\Downloads\week3> Get-Content .\iou.txt
John Doe owes Jane Smith $1,000.00
PS D:\Downloads\week3> Get-ItemProperty .\iou.txt | Format-List

Directory: D:\Downloads\week3

Name : iou.txt
Length : 36
CreationTime : 3/26/2017 5:55:46 PM
LastWriteTime : 3/26/2017 5:55:46 PM
LastAccessTime : 3/26/2017 5:55:46 PM
Mode : -a----

PS D:\Downloads\week3> Get-FileHash .\iou.txt -Algorithm MD5 | Format-List

Algorithm : MD5
Hash : 17F6B6FB31AAEB1F37864667D87E527B
Path : D:\Downloads\week3\iou.txt

Now let's compromise the file. Let's assume I am John Doe, the IT guy with access to global administrative privileges. Let's also consider that most people don't take a hash of their files when they save them to ensure authenticity.

Below I overwrite the contents of iou.txt (set-content) to state that Jane now owes John $100,000, a pretty significant change.
I display the contents of iou.txt (get-content) to validate that the modification was made. I then display the file attributes (get-itemproperty); here you can see that the file size is the same and the only attribute that changed is the LastWriteTime, a significant attribute, but we will set it to match the value from before we tampered with the contents of the file.
Next is the hash of the file contents (get-filehash), which shows a different hash; remember, though, that most people don't hash their files and store the hash to guarantee authenticity. The hash is a powerful tool in determining authenticity.
Next, I set the CreationTime, LastWriteTime and LastAccessTime to ensure they match the original file.
Listing the file attributes again, you can see everything now matches the original file: same name, file size, timestamps, etc.
The only evidence we have that the file was changed is the differing hash.

PS D:\Downloads\week3> Set-Content .\iou.txt 'Jane Smith owes John Doe $100,000.'
PS D:\Downloads\week3> Get-Content .\iou.txt
Jane Smith owes John Doe $100,000.
PS D:\Downloads\week3> Get-ItemProperty .\iou.txt | Format-List

Directory: D:\Downloads\week3

Name : iou.txt
Length : 36
CreationTime : 3/26/2017 5:55:46 PM
LastWriteTime : 3/26/2017 6:08:28 PM
LastAccessTime : 3/26/2017 5:55:46 PM
Mode : -a----

PS D:\Downloads\week3> Get-FileHash .\iou.txt -Algorithm MD5 | Format-List

Algorithm : MD5
Hash : FB86680C6A90402598A2A1E4A27AA278
Path : D:\Downloads\week3\iou.txt

PS D:\Downloads\week3> $(Get-Item iou.txt).creationtime=$(Get-Date "3/26/2017 5:55:46 PM")
PS D:\Downloads\week3> $(Get-Item iou.txt).lastaccesstime=$(Get-Date "3/26/2017 5:55:46 PM")
PS D:\Downloads\week3> $(Get-Item iou.txt).lastwritetime=$(Get-Date "3/26/2017 5:55:46 PM")
PS D:\Downloads\week3> Get-Content .\iou.txt
Jane Smith owes John Doe $100,000.
PS D:\Downloads\week3> Get-ItemProperty .\iou.txt | Format-List

Directory: D:\Downloads\week3

Name : iou.txt
Length : 36
CreationTime : 3/26/2017 5:55:46 PM
LastWriteTime : 3/26/2017 5:55:46 PM
LastAccessTime : 3/26/2017 5:55:46 PM
Mode : -a----

PS D:\Downloads\week3> Get-FileHash .\iou.txt -Algorithm MD5 | Format-List

Algorithm : MD5
Hash : FB86680C6A90402598A2A1E4A27AA278
Path : D:\Downloads\week3\iou.txt

Note: All of the above examples and commands were executed on a Windows host using PowerShell.
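As a follow-on, here is a minimal Python sketch of the countermeasure implied above: record a digest when the file is written and verify it later (SHA-256 is used instead of MD5 since MD5 collisions are practical; where the reference hash is stored, ideally out of the administrator's reach, is the hard part and is not shown):

import hashlib

def file_sha256(path):
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# At write time: record the digest somewhere the administrator cannot modify.
reference = file_sha256("iou.txt")

# Later: timestamps and size can be forged, but the digest will not match
# if the contents were altered.
if file_sha256("iou.txt") != reference:
    print("iou.txt has been tampered with")
else:
    print("iou.txt matches the recorded hash")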

References:

Compliance: Governance, Authenticity and Availability. (n.d.). Retrieved March 26, 2017, from http://object-matrix.com/solutions/corporate/finance/compliance/

Pfleeger, C. P., Pfleeger, S. L., & Margulies, J. (2015). Security in computing (5th ed.). Upper Saddle River: Prentice Hall.

FIT MGT5114 – Wk3 Discussion 1 Post

Question:

The traditional protection levels used by operating systems to protect files are read, write, and execute. What are some other possible levels that a user may wish to apply to files, folders, code, etc.? Justify your answers with examples.

Response:

File and folder permissions are governed slightly differently based on operating system type, but the constructs are the same. On Unix and other POSIX-compliant systems (Linux, Android, macOS, Windows with NTFS, etc.), file and folder permissions are managed using a user, group, others (or world) model.

For example:
foo.bar sticky bit | owner | group | world
foo.bar - | rwx | r-x | r-x (-rwxr-xr-x)

Files and folders can have permissions quickly set for Owner, Group and World by using the numeric value for the permission mask.
r (read) = 4
w (write) = 2
x (execute) = 1

To assign the file “foo.bar” the permission mask of:
owner = rwx
group = r-x
others = r-x
The command would be “chmod 755 foo.bar”

Unix-based systems leverage three additional permissions: the sticky bit, setuid and setgid.
When the setuid permission is set the user executing the file assumes the permissions of the file owner.
When the setgid permission is set the user executing the file is granted the permissions based on the group associated with the file.
When the sticky bit is set a file or directory can only be deleted by the file owner, directory owner or root.

These special permissions are set in the following fashion:
sticky bit = 1000
setgid = 2000
setuid = 4000

The same idea applies as with setting basic file permissions: to set the sticky bit on foo.bar with full permissions, the command would be "chmod 1777 foo.bar". To set setgid and setuid with rwx permissions for the owner and read-only permissions for the group and others, the command would be "chmod 6744 foo.bar".
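These masks can also be applied programmatically; below is a minimal Python sketch (the file name foo.bar is just the example used above):

import os
import stat

# chmod 1777 foo.bar: sticky bit plus rwx for owner, group, and others
os.chmod("foo.bar", stat.S_ISVTX | 0o777)

# chmod 6744 foo.bar: setuid and setgid, rwx for owner, read-only for group and others
os.chmod("foo.bar", stat.S_ISUID | stat.S_ISGID | 0o744)

# The octal literal mirrors the chmod command directly, e.g. chmod 1777:
os.chmod("foo.bar", 0o1777)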

Windows-based systems follow a similar file and folder permissions construct, at least on systems using the POSIX-compliant NTFS file system (most modern Windows OSes). Older Microsoft operating systems like MS-DOS (FAT16 file system) and Windows 95 (FAT32 file system) used file attributes (Read-Only or Read-Write) rather than a full permission system.

Permission inheritance is an important concept; setgid and setuid are used to facilitate inheritance. The application is slightly different on Windows operating systems, but the premise is the same.

Source code can be protected in various ways outside of just file permissions. One option is to compile the code, making it executable but not readable. Compiled languages like C++ compile into machine code, and these compiled binaries are not easily decompiled. Another option is to use a bytecode compiler, often used with interpreted languages like Python, Perl, Ruby, etc. Machine code needs to be compiled for specific architectures; for example, x86, x64 and ARM would require three separate binaries, while bytecode-compiled binaries work across architectures. The downside with bytecode-compiled binaries is that most of the source code structure is retained in the compiled binary, making it far easier to decompile.
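As a quick illustration of the bytecode option, Python's standard library can compile a script to a .pyc file; the script name below is a placeholder:

import py_compile

# Compiles foo.py to bytecode (e.g., __pycache__/foo.cpython-36.pyc on Python 3.6).
# The .pyc can be shipped without foo.py, but decompilers such as uncompyle6 can
# recover something close to the original source, unlike native machine code.
py_compile.compile("foo.py")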

Daemons like auditd provide the ability to maintain detailed audit trails on file access. Systems like Varonis provide the ability to audit and verify permissions to ensure that the proper permissions are assigned to files and folders.

Outside of file and folder permissions, there are application-level permissions, such as RDBMS permissions, which determine how a user can interact with the RDBMS and the data it houses. Object store permissions like those of AWS S3 offer an authorization model similar to filesystem permissions, and these permissions are typically managed via API using standard authentication methods like OAuth2 and SAML token-based authentication. NAC, or Network Access Control, is a system which controls network access and manages security posture. Revision control systems like Git use access controls to protect source code; in the case of Git these ACLs are very similar to UNIX-based ACLs. Many systems today which leverage REST and SOAP APIs to access data use tokens and keys to authenticate users and grant rights. I just finished working on some code today (https://gist.github.com/rbocchinfuso/36f8c58eb93c4932ec4d31b6818b82e8) for a project which uses the Smartsheet API and token-based authentication so that cells can be updated using a command from Slack. This code authenticates using a token contained in an unpublished config.inc.php file and allows fields in a Smartsheet to be toggled using a command similar to "ssUpdate rowID,columnID,state". Token-based authentication, in this case, can provide VIEWER, EDITOR, EDITOR_SHARE, ADMIN and OWNER (https://smartsheet-platform.github.io/api-docs/#authentication) privileges while being stateless and without requiring username and password authentication.
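To show the same token-based pattern in Python rather than PHP, here is a hedged sketch of a cell update against the Smartsheet REST API; the sheet, row, and column IDs are placeholders, and the endpoint and payload shape follow my reading of the Smartsheet 2.0 API documentation rather than the gist above:

import json
import urllib.request

API_TOKEN = "REPLACE_WITH_TOKEN"  # grants whatever access level the token owner holds
SHEET_ID = 1234567890             # placeholder sheet ID
ROW_ID = 111                      # placeholder row ID
COLUMN_ID = 222                   # placeholder column ID

def ss_update(row_id, column_id, state):
    """Update a single cell using stateless, token-based (Bearer) authentication."""
    payload = [{"id": row_id, "cells": [{"columnId": column_id, "value": state}]}]
    req = urllib.request.Request(
        "https://api.smartsheet.com/2.0/sheets/%d/rows" % SHEET_ID,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": "Bearer " + API_TOKEN,  # no username/password exchanged
            "Content-Type": "application/json",
        },
        method="PUT",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

# e.g., the Slack command "ssUpdate rowID,columnID,state" would end up calling:
# ss_update(ROW_ID, COLUMN_ID, "done")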

References

Pfleeger, C. P., Pfleeger, S. L., & Margulies, J. (2015). Security in computing (5th ed.). Upper Saddle River: Prentice Hall.

FIT MGT5114 – Wk2 Discussion 1 Peer Response

Response 1:

Good points. To play devil's advocate here: do you think the scenario you put forward, regarding OS or firmware upgrades on older or unsupported devices, is likely to increase the probability of introducing unintentional vulnerabilities?

Response 2:

I agree that this is a "yes and no" sort of question. I like your example about clicking on the "You will never believe what So and So Famous Person is doing now" link because it highlights the idea that the user is experiencing unexpected behavior, and thus the probability of malicious activity is likely greater. IMO the complexity here lies in determining whether the unexpected behavior indicates a vulnerability or a threat.