Richard J. Bocchinfuso

"Be yourself; everyone else is already taken." – Oscar Wilde

FIT – MGT5157 – Week 3

The submissions for this assignment are posts in the assignment’s discussion. Below are the discussion posts for Richard Bocchinfuso.

Discussion: Describe the differences between IPv6 and IPv4. What implications does it have on networks? On the user? What could be done to speed up the transition process?

First let’s talk about a major catalyst for the development and adoption of IPv6: the idea that the internet would exhaust the available IP address space. This prediction was made back in 2011, when it was stated that the Internet would exhaust all available IP addresses by 4 AM on February 2, 2011. (Kessler, 2011) Here we are 2,725 days later and the “IPcalypse” or “ARPAgeddon” has yet to happen; in fact @IPv4Countdown is still foreshadowing IPv4 doomsday scenarios via Twitter. So what is the deal? Well, it’s true the available IPv4 address space is limited, with a pool of slightly less than 4.3 billion addresses (2^32, more on this later). It is important to remember that many of these predictions predate Al Gore taking credit for creating the internet. Sorry Bob Kahn and Vint Cerf, it was Al Gore who made this happen.

Back in the 1990s we didn’t have visibility to technologies like CIDR (Classless Inter-Domain Routing) and NAT (Network Address Translation). In addition, many of us today use techniques like reverse proxying and proxy ARPing. Simplistically, this allows something like NGINX to act as a proxy (middleman) where all services can be placed on a single port behind a single public IP address and traffic can be appropriately routed and proxied using that one address.

For example, a snippet of an NGINX reverse proxy config might look something like this:

server {
    listen 80;
    server_name site.foo.com;
    location / {
        access_log on;
        client_max_body_size 500M;
        proxy_pass http://INTERNAL_HOSTNAME_OR_IP;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

server {
    listen 80;
    server_name site.bar.com;
    location / {
        access_log on;
        client_max_body_size 500M;
        proxy_pass http://INTERNAL_HOSTNAME_OR_IP;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

Let’s assume that there are two DNS A records, one for site.foo.com and one for site.bar.com, that both point to the same IP address, with a web server for each site listening on port 80. How does a request for site.foo.com know to go to web server A and a request for site.bar.com know to go to web server B? The answer is a reverse proxy, which proxies each request based on the requested server name; this is what we see above.
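You can demonstrate this Host-header routing from any client with curl; the IP below is just a placeholder for the single shared public address:

$ curl -H "Host: site.foo.com" http://203.0.113.10/   # NGINX proxies this to web server A
$ curl -H "Host: site.bar.com" http://203.0.113.10/   # same IP, proxied to web server B

Both requests hit the same public IP; the only thing that differs is the Host header, which is exactly what NGINX matches against server_name.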

I use this configuration for two sites which I host, bocchinfuso.net and gotitsolutions.org.

A dig (domain information groper) of both of these domains reveals that their A records point to the same IP address; the NGINX reverse proxy does the work to route to the proper server or service based on the requested server name and proxies the traffic back to the client. nslookup would work as well if you would like to try, but dig gives a cleaner display for posting below.

$ dig bocchinfuso.net A +short
173.63.111.136
$ dig gotitsolutions.org A +short
173.63.111.136

NGINX is a popular web server which can also be used for reverse proxying, as I am using it above, as well as load balancing.

IPv6 (Internet Protocol version 6) is the next generation of, or successor to, IPv4 (Internet Protocol version 4). An IPv4 address is a numerical address made up of four 8-bit octets, comprising a 32-bit address; IPv4 addresses are written as four numbers between 0 and 255.

Source:  ZeusDB. (2015, July 30). Understanding IP Addresses – IPv4 vs IPv6.

IPv6 addresses consist of eight 16-bit segments comprising a 128-bit address, giving IPv6 a total address space of 2^128 (~340.3 undecillion addresses), which is a pretty big address space. To put 2^128 into perspective, it is enough address space for every person on the planet to personally have 2^95, or about 39.6 octillion, IP addresses. That’s a lot of IP address space.
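You can sanity-check the address-space math yourself with bc on just about any Linux or macOS box:

$ echo "2^32" | bc
4294967296
$ echo "2^128" | bc
340282366920938463463374607431768211456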

Source:  ZeusDB. (2015, July 30). Understanding IP Addresses – IPv4 vs IPv6.

One of the challenges with IPv6 is that it is not easily interchangeable with IPv4; this has slowed adoption, and with the use of proxy, tunneling, and similar technologies I believe the sense of urgency is not what it once was. IPv6 adoption has been slow, but with the rapid adoption of IoT and the number of devices being brought online we could begin to see a significant increase in the IPv6 adoption rate. For perspective, in 2002 Cisco forecasted that IPv6 would be fully adopted by 2007.

Source:  Pingdom. (2009, March 06). A crisis in the making: Only 4% of the Internet supports IPv6.

The Internet Society State of IPv6 Deployment 2017 paper states that ~9 million domains and 23% of networks are advertising IPv6 connectivity. When we look at the adoption of IPv6, I think this table does a nice job outlining where IPv4 and IPv6 sit relative to each other.

Source:  Internet Society. (2017, May 25). State of IPv6 Deployment 2017.

The move to IPv6 will be nearly invisible from a user perspective; our carriers (cable modems, cellular devices, etc.) abstract us from the underpinnings of how things work. Our request to google.com will magically resolve to an IPv6 address instead of an IPv4 address, and it won’t matter to the user.

For example, here is a dig of google.com that returns both its IPv4 (A) and IPv6 (AAAA) addresses.

$ dig google.com A google.com AAAA +short
172.217.3.46
2607:f8b0:4004:80e::200e

Note: If you’re a Linux user you know how to use dig, macOS should have dig, and if you’re on Windows and don’t already know how to get access to dig, the easier path can be found here: https://www.danesparza.net/2011/05/using-the-dig-dns-tool-on-windows-7/

The adoption rate of IPv6 could be increased by simplifying interoperability between IPv4 and IPv6. The exhaustion of the IPv4 address space and the exponential increase in connected devices are upon us, and this may be the catalyst the industry needs to simplify interoperability and speed adoption.

With the above said, interestingly IPv6 adoption is slowing.

McCarthy, K. (2018, May 22). IPv6 growth is slowing and no one knows why. Let’s see if El Reg can address what’s going on.

I think it’s a chicken-or-the-egg situation. There have been IPv4 address space concerns for years; the heavy lift required to adopt IPv6 led to slow and low adoption rates, which pushed innovation in a different direction. With the use of a reverse proxy maybe I don’t need any more public address space, etc. Only time will tell, but this is foundational infrastructure akin to the interstate highway system; change will be a long journey and it’s possible we will start to build new infrastructure before we ever reach the destination.

 

References

Hogg, S. (2015, September 22). ARIN Finally Runs Out of IPv4 Addresses. Retrieved July 20, 2018, from https://www.networkworld.com/article/2985340/ipv6/arin-finally-runs-out-of-ipv4-addresses.html

Internet Society. (2017, May 25). State of IPv6 Deployment 2017. Retrieved July 20, 2018, from https://www.internetsociety.org/resources/doc/2017/state-of-ipv6-deployment-2017/

Kessler, S. (2011, January 22). The Internet Is Running Out of Space…Kind Of. Retrieved July 20, 2018, from https://mashable.com/2011/01/22/the-internet-is-running-out-of-space-kind-of/#49ZaFObrqPqW

McCarthy, K. (2018, May 22). IPv6 growth is slowing and no one knows why. Let’s see if El Reg can address what’s going on. Retrieved July 20, 2018, from https://www.theregister.co.uk/2018/05/21/ipv6_growth_is_slowing_and_no_one_knows_why/

NGINX. (2018, July 20). High Performance Load Balancer, Web Server, & Reverse Proxy. Retrieved July 20, 2018, from https://www.nginx.com/

Pingdom. (2009, March 06). A crisis in the making: Only 4% of the Internet supports IPv6. Retrieved July 20, 2018, from https://royal.pingdom.com/2009/03/06/a-crisis-in-the-making-only-4-of-the-internet-supports-ipv6/

Pingdom. (2017, August 22). Tongue twister: The number of possible IPv6 addresses read out loud. Retrieved July 20, 2018, from https://royal.pingdom.com/2009/05/26/the-number-of-possible-ipv6-addresses-read-out-loud/

Wigmore, I. (2009, January 14). IPv6 addresses – how many is that in numbers? Retrieved July 20, 2018, from https://itknowledgeexchange.techtarget.com/whatis/ipv6-addresses-how-many-is-that-in-numbers/

ZeusDB. (2015, July 30). Understanding IP Addresses – IPv4 vs IPv6. Retrieved July 20, 2018, from https://www.zeusdb.com/blog/understanding-ip-addresses-ipv4-vs-ipv6/

Yacine, NAT certainly has helped ease the IPv4 address space issue, as did other things like proxy ARPing and reverse proxying, all techniques to use less address space (also pretty important for network security).

arping can be a handy little tool to see if you can contact a system and what MAC address it responds with.

> arp-ping.exe -s 0.0.0.0 192.168.30.15
Reply that B8:CA:3A:D1:7E:AB is 192.168.30.15 in 4.604ms
Reply that B8:CA:3A:D1:7E:AB is 192.168.30.15 in 15.745ms
Reply that B8:CA:3A:D1:7E:AB is 192.168.30.15 in 15.642ms
Reply that B8:CA:3A:D1:7E:AB is 192.168.30.15 in 15.623ms

While IPv6 may provide a ton of IP address space, I don’t think the use of NAT and proxies will change; these techniques are as much about security as they are about extending the address space.

James, love the profile pic. Setting a hard date to kill IPv4 is a stick, no carrot. The IPv6 shift discussion needs to be driven by the market makers; they should make it compelling enough for enterprises to begin moving faster. The market makers can make a huge impact: Netflix accounts for > 1/3 of all internet traffic, people are rushing to AWS, Azure and GCP at alarming rates, and the only procurers of tech that really matter are Amazon, Apple, Facebook, Alphabet, Microsoft, Tencent and Alibaba. If the market makers move everyone else will follow; they will have no choice. Why aren’t they moving faster?

This is further compounded by the fact that neither Cisco, Juniper, Arista nor any other mainstream networking equipment provider is mentioned above. It’s no secret that Amazon, Facebook, and others are running their own intellectual property to solve lots of legacy networking issues. Facebook is building and deploying its own switches and load balancers, and AWS wrote its own networking stack because VPC needs could not be handled by traditional networking provider VLANs and overlay networks. Now we are seeing the adoption of SDN increase, which could speed up IPv6 adoption or could slow it down.

Assignment

[google-drive-embed url=”https://docs.google.com/document/d/1cLkTRQEDEoD6v49Ywu7Jkarc5T4FE-ggc0Mc91KG6H8/preview?usp=drivesdk” title=”Bocchinfuso – FIT – MGT5157 – Week 3 – Assignment 3″ icon=”https://drive-thirdparty.googleusercontent.com/16/type/application/vnd.google-apps.document” width=”100%” height=”400″ style=”embed”]

FIT – MGT5157 – Week 2

The submissions for this assignment are posts in the assignment’s discussion. Below are the discussion posts for Richard Bocchinfuso.

from 2.3 Discussion

Jul 13, 2018 10:48pm

Richard Bocchinfuso

What is the reason behind packet loss? What can protocols do to limit packet loss? Are there tools available for providers and consumers that identify the source of packet loss?

What is the reason behind packet loss?

“A primary cause of packet loss is the finite size of the buffer involved” (Wu & Irwin, 2013, p. 15)

Link congestion can cause packet loss. This is where one of the devices in the packet’s path has no room in its buffer to queue the packet, so the packet has to be discarded. Increasing available bandwidth can be a resolution to link congestion; it allows buffers to empty quicker, reducing or eliminating queuing. The use of QoS to prioritize traffic like voice and video can lower the probability of a dropped packet for traffic that does not tolerate packet loss and retransmission.

Bandwidth constraints: too much data on too small a pipe creates congestion and packet loss.

Congestion is like a four-lane road merging into a one-lane road. Packet loss can be intentional, where packets are dropped because a rule is in place to drop packets at a certain limit; hosting providers use this to control how customers use available bandwidth. Packet loss can also occur because of unintentional congestion, where the traffic simply exceeds the available bandwidth.

Device performance can also cause packet loss. This occurs in a situation where you may increase the bandwidth of the route the packet will take, but the device (router, switch, firewall, etc…) is not able to handle the load. In this case, a new device is likely required to support the network load.

For example, a Cisco ASA 5505 is meant to handle 150 Mbps of throughput; if the device is pushed beyond that, it will likely begin to have issues: maybe the CPU of the device can’t process the throughput, and the device then experiences congestion and begins dropping packets.

Faulty hardware, software or misconfiguration: a faulty component like an SFP (small form-factor pluggable) or a cable, a bug in the device software, or a configuration issue like a duplex mismatch can all cause packet loss.

There are well-documented examples of software bugs in network devices causing packet loss.

Network attacks like a Denial of Service (DoS) attack can result in packets being dropped because the attack is overwhelming a device with traffic.

What can protocols do to limit packet loss?

TCP (Transmission Control Protocol) is a connection-oriented protocol which is built to detect packet loss and to retransmit data. The protocol itself is built to handle packet loss.

UDP (User Datagram Protocol) is a connectionless protocol that will not detect packet loss and will not retransmit. We see UDP used for streaming content like stock ticker data, video feeds, etc. UDP is often used in conjunction with multicast, where data is transmitted one-to-many or many-to-many. You can probably visualize the use cases here, and how packet loss can impact the user experience. With UDP, data is lost rather than the system experiencing slow or less than optimal response times.
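One way to observe the difference empirically is with iperf3, which reports lost datagrams for UDP tests while TCP tests simply retransmit (a sketch; assumes an iperf3 server is reachable at HOSTNAME):

$ iperf3 -s                      # on the server
$ iperf3 -c HOSTNAME -u -b 50M   # on the client: 50 Mbps of UDP; summary shows Lost/Total Datagrams
$ iperf3 -c HOSTNAME             # TCP run for comparison; loss shows up as retransmits (Retr) instead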

Layer 4 Transport Optimizations

  • TCP Proxy and TFO (Transport Flow Optimization)
  • Compression
  • DRE (Data Redundancy Elimination): a technique used to reduce traffic by removing redundant data transmission. This can be extremely useful for chatty protocols like CIFS (SMB).

Layer 2 and 3 Optimizations

  • RIP (Routing Information Protocol) and BGP (Border Gateway Protocol) make routing (pathing) decisions based on paths, policies, and rules.
  • OSPF (Open Shortest Path First) and IS-IS (Intermediate System to Intermediate System) use link-state routing (LSR) algorithms to determine the best route (path).
  • EIGRP (Enhanced Interior Gateway Routing Protocol) is an advanced distance-vector routing protocol used to automate network routing decisions.
  • Network Segmentation and QoS (Quality of Service): network congestion is a common cause of packet loss; segmentation and QoS can ensure that the right traffic is given priority on the network while less critical traffic is dropped (a minimal marking sketch follows this list).
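As a minimal sketch of QoS marking on a Linux host (the port is purely illustrative, SIP signaling on UDP/5060), traffic can be tagged Expedited Forwarding so upstream gear that honors DSCP prioritizes it:

$ iptables -t mangle -A OUTPUT -p udp --dport 5060 -j DSCP --set-dscp-class EF   # mark VoIP signaling as EF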

Are there tools available for providers and consumers that identify the source of packet loss?

There are no hard and fast rules for detecting packet loss on a network but there are tools and an approach that can be followed.

Some tools I use for diagnosis and troubleshooting are covered in my follow-up posts below: paping, hping3, MTR, tcpdump/Windump, tcpreplay, and NetworkMiner.

 

References

Bocchinfuso, R. (2008, January 15). Fs Cisco Event V6 Rjb. Retrieved July 13, 2018, from https://www.slideshare.net/rbocchinfuso/fs-cisco-event-v6-rjb

Hurley, M. (2015, April 28). 4 Causes of Packet Loss and How to Fix Them. Retrieved July 13, 2018, from https://www.annese.com/blog/what-causes-packet-loss

Packet Loss – What is it, How to Diagnose and Fix It in your Network. (2018, May 01). Retrieved July 13, 2018, from https://www.pcwdld.com/packet-loss

Wu, C., & Irwin, J. D. (2013). Introduction to computer networks and cybersecurity. Hoboken: CRC Press.

from 2.3 Discussion

Jul 15, 2018 1:31pm

Richard Bocchinfuso

Andrew, good post. The only comment I would make is to be careful using ping as the method to diagnose packet loss. It is a great place to start if the problem is really overt, but often the issues are more complex, and dropped ICMP packets can be expected behavior because they typically are prioritized out by QoS.

I typically recommend the use of paping (https://code.google.com/archive/p/paping/) or hping3 (https://tools.kali.org/information-gathering/hping3) to send a TCP vs. ICMP request.

If you are going to use ping I would also suggest increasing the ICMP payload size, assuming the target is not rejecting ICMP requests or dropping them because of a QoS policy.
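Roughly how I would use them (Linux syntax; the targets are just examples):

$ paping www.google.com -p 443 -c 4    # TCP "ping" against a specific service port
$ hping3 -S -p 80 -c 4 www.google.com  # TCP SYN probes; requires root
$ ping -s 1400 -c 4 www.google.com     # ICMP with a larger 1400-byte payload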

 

Lastly, there are lots of hops between your computer and the destination, and using MTR is a great way to see where packets are being dropped, where the latency is, etc.
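A typical MTR report run looks something like this:

$ mtr -rwc 100 google.com   # report mode, wide output, 100 probes per hop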

from 2.3 Discussion

Jul 15, 2018 8:50pm

Richard Bocchinfuso

Jonathan, a couple of comments on your post. While TCP packet loss and dropped packets have the same result, a discarded packet requiring retransmission, packet loss has an implied context that the discarded packet was unintentional, for the reasons you mention above. Dropped packets can also be intentional; for example, ICMP (ping) traffic is often deprioritized by QoS, so these packets are intentionally dropped so they do not impact higher priority traffic.

UDP is a connectionless protocol, so there is no ACK from the receiver. Packets are sent, and if they are lost there is no retransmit because there is no way for the protocol to know the packet was not delivered; with UDP, data can be lost or delivered out of order. There are implementations of UDP (e.g., RUDP) where checks can be added to increase the reliability of the protocol. UDP is often used in conjunction with multicast; if you think about multicast and how TCP and UDP work, it becomes obvious why multicast works with a connectionless protocol like UDP and why TCP can only be used in unicast applications.

from 2.3 Discussion

Jul 15, 2018 9:35pm

Richard Bocchinfuso

For anyone looking to play with packet sniffing, regardless of the sniffer it is always good to capture a quality workload, be able to modify your lab environment, and replay the workload to see what happens. Windump (tcpdump for Windows) is a great tool to capture traffic to a pcap file, but I would also become familiar with tcpreplay. You probably want to trade in that Windows box for Linux; my distro of choice for this sort of work is Parrot Security OS. There is one Windows tool I really like, called NetworkMiner, check it out. I would also get familiar with GNS3 and the NETem appliance. So many great tools out there, but GNS3 is a critical tool for learning. Capturing a quality workload to a pcap, modifying your lab network with GNS3 and using tcpreplay to replay the workload while observing behavior provides a great way to experiment and see the impact. Looking ahead, GNS3 provides a way to apply the routing and subnetting theory that it looks like we’ll be diving into in week three.
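A minimal capture, impair, and replay loop on Linux might look like this (the interface name, delay, and loss values are placeholders):

$ tcpdump -i eth0 -w workload.pcap                        # capture a quality workload to a pcap
$ tc qdisc add dev eth0 root netem delay 50ms loss 2%     # impair the link with netem
$ tcpreplay --intf1=eth0 workload.pcap                    # replay the workload and observe behavior
$ tc qdisc del dev eth0 root                              # remove the impairment when done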

 

Assignment

[google-drive-embed url=”https://docs.google.com/document/d/1hv4OtK-lxTN-HrsT3sLci5zfuTTFaD6cdFTYL_2acW8/preview?usp=drivesdk” title=”Bocchinfuso – FIT – MGT5157 – Week 2 – Assignment 2″ icon=”https://drive-thirdparty.googleusercontent.com/16/type/application/vnd.google-apps.document” width=”100%” height=”400″ style=”embed”]

FIT – MGT5157 – Week 1

The submissions for this assignment are posts in the assignment’s discussion. Below are the discussion posts for Richard Bocchinfuso.

What is the internet2? What implications does it hold to the current internet infrastructure?

Super interesting question, because while I may have heard of Internet2 years ago, I can’t say I ever really knew what it was. I also think it’s interesting given Tim Berners-Lee’s recent comments on his regrets about what he was so pivotal in creating, the World Wide Web.

As I read about Internet2, I can’t help but think about how it parallels ARPANET and NSFNET. Rather than trying to create a network and pass the first packets like ARPANET, the Internet2 consortium has the goal of innovating new Internet technologies to meet the demands of the modern connected world.

Leonard Kleinrock does a fabulous job explaining the first router, a packet switch built by BBN (Bolt, Beranek and Newman), and the first message sent across the Internet (the ARPANET then) between UCLA and SRI (Stanford Research Institute). (Kleinrock, 2009)

I also highly recommend a documentary called “Lo and Behold, Reveries of the Connected World” (It is on Netflix).

If you have spare time and want to dig deeper, Charles Severance has a great Coursera class called “Internet History, Technology, and Security” which I also recommend.

Internet2 is both a research and development initiative and a tangible, domestic U.S., nationwide, carrier-class hybrid optical and packet network with the goal of supporting research facilities in their development of advanced Internet applications. (Wu & Irwin, 2013, p. 10)

Funny how similar the maps below look; the parallel between the Internet2 map and the NSFNET map is not a coincidence. The infrastructure required to build these networks is owned by few providers, and these organizations invest heavily in lobbyists to block new entrants. It’s a game that undoubtedly slows innovation. Just read about the challenges that Google Fiber had trying to lay fiber. (Brodkin, 2017)

The Internet2 backbone.

Source:  Wu, Chwan-Hwa (John); Irwin, J. David. Introduction to Computer Networks and Cybersecurity (Page 11). CRC Press. Kindle Edition.

Source:  Wikipedia. (2018, July 03). National Science Foundation Network. Retrieved July 6, 2018, from https://en.wikipedia.org/wiki/National_Science_Foundation_Network

 

Regarding what implications Internet2 holds for the current internet infrastructure: Internet2 seems to be focused on research and education, not all that different from the objectives of ARPANET, CSNET, and NSFNET. Internet2 is aiming to solve the problems of the modern Internet with a focus on innovating to enable research and education, including innovations that aim to increase bandwidth, remove bottlenecks, and enable software-defined networking.

The one thing that concerned me is that in my research I did not see a role for commercial partners like Netflix and Google. This concerns me because we live in a time where these two providers alone are responsible for > 50% of Internet traffic. This means that massive backbone providers like Level 3 and Cogent are carrying a ton of Netflix and Google (more specifically YouTube) traffic. Unlike the days of ARPANET, commercial entities have a massive role in the evolution and innovation of the Internet. While CERN is mentioned, I think we would be remiss not to recognize that there is a migration of data, even in research and education, to the cloud, which means that Amazon becomes the carrier’s customer, not the research or education institution.

Internet goliaths like Google, Facebook, Netflix, and Amazon are struggling to buy off-the-shelf infrastructure to support their massive needs.  All of these providers are building infrastructure and in many cases open sourcing the how-to documentation.  There is no doubt that we live in interesting technological times.


With all that said, as for the implications of Internet2 on the current Internet, there are not many that I can see. It would seem to me that Internet2 will need to focus on a niche to even remain relevant.

One final thought. Did the Internet2 consortium have something to do with us moving off that prehistoric LMS we were using to Canvas? If so, keep up the great work. The ability to create rich media posts, how revolutionary. ¯\_(ツ)_/¯

References

Brodkin, J. (2017, November 24). AT&T and Comcast lawsuit has nullified a city’s broadband competition law. Retrieved July 6, 2018, from https://arstechnica.com/tech-policy/2017/11/att-and-comcast-win-lawsuit-they-filed-to-stall-google-fiber-in-nashville/

Brooker, K. (2018, July 02). “I Was Devastated”: The Man Who Created the World Wide Web Has Some Regrets. Retrieved July 6, 2018, from https://www.vanityfair.com/news/2018/07/the-man-who-created-the-world-wide-web-has-some-regrets

Kleinrock, L. (2009, January 13). The first Internet connection, with UCLA’s Leonard Kleinrock. Retrieved July 6, 2018, from https://youtu.be/vuiBTJZfeo8

Techopedia. (2018, July 6). What is Internet2? – Definition from Techopedia. Retrieved July 6, 2018, from https://www.techopedia.com/definition/24955/internet2

Wikipedia. (2018, July 03). National Science Foundation Network. Retrieved July 6, 2018, from https://en.wikipedia.org/wiki/National_Science_Foundation_Network

Wu, C., & Irwin, J. D. (2013). Introduction to computer networks and cybersecurity. Hoboken: CRC Press.

Hailey, good post, I enjoyed reading it. I have to say I wonder how relevant a private research and education network can be in today’s age. The project seems way underfunded to me given the dollars being put into Internet capacity by huge players in the space. The other thing that makes me wonder if Internet2 is viable is the fact that it is a domestic network living in an increasingly flat world. Will research and education institutions using Internet2 connectivity be able to ride the network to Microsoft’s submersible data center?

I just don’t know about Internet2. The information and mission feel a little dated. 100 Gigabit connectivity is everywhere today; these speeds are no longer just for carrier interconnects, they are everywhere in the modern data center.

The private sector is moving pretty fast, and it has to innovate for competitive advantage; the amount of cash being dumped into moonshot ideas in the private sector is unprecedented, which I think creates an even bigger problem for the long-term viability of Internet2.

James, good post and you make some very good points.  Five years ago most enterprises leveraged private MPLS (Multiprotocol Label Switching) networks to build their WAN (Wide Area Network) for things like intranet communication, unified communications, etc…  This reminds me of the Internet2 value proposition.

Source:  Maupin, 2016

Fast forward to today and MPLS is being supplanted at an alarming rate by technologies like SD-WAN (Software-Defined WAN). Proponents of MPLS argue that once your packets hit the public Internet, you will not be able to guarantee low levels of packet loss, latency, and jitter. Sound familiar from any of the research on this topic?

OK, this might be somewhat true: you can’t guarantee QoS (Quality of Service) on the internet. But now let’s pause for a minute and think about the context of how the market is shifting; cloud-based computing has had a major impact on the industry. Cloud-based communications companies like 8×8, whose CEO happens to be a Florida Institute of Technology graduate, have challenged these notions and pushed technologies like SD-WAN to address the issues of packet loss, latency, and jitter that make public Internet circuits a problem in certain use cases.

I always ask myself, would Arthur Rock put his money here? Based on what I know about Internet2, at this point, I would say probably not.

References

Maupin, R. (2016, May 24). Have I designed correctly my MPLS network? Retrieved July 6, 2018, from https://networkengineering.stackexchange.com/questions/30673/have-i-desiged-correctly-my-mpls-network

Assignment

[google-drive-embed url=”https://docs.google.com/document/d/1YgwjXzOziekTZR4vqeZM0S-lQYfuORMntZfKTSFzFQI/preview?usp=drivesdk” title=”Bocchinfuso – FIT – MGT5157 – Week 1 – Assignment 1″ icon=”https://drive-thirdparty.googleusercontent.com/16/type/application/vnd.google-apps.document” width=”100%” height=”400″ style=”embed”]

FIT – MGT5156 – Week 8

Essay Assignment

You are the CISO of a large company. Using your own machine as an example, tell me how you would harden your own machine and how you would harden machines across the company, using ideas garnered from this class.

[google-drive-embed url=”https://docs.google.com/document/d/1R3qcJYeizUXoUgcyTjWJq5ijO6PPWsAooJ_cegPl4WQ/preview?usp=drivesdk” title=”Bocchinfuso – FIT – MGT5156 – Week 8 – Assignment 1″ icon=”https://drive-thirdparty.googleusercontent.com/16/type/application/vnd.google-apps.document” width=”100%” height=”400″ style=”embed”]

 

Final Exam

[google-drive-embed url=”https://docs.google.com/document/d/1XexEQY0nD5tzZXTnXDsEbe23zcFjt7c6iG_WEYPzPJo/preview?usp=drivesdk” title=”Bocchinfuso – FIT – MGT5156 – Final Exam” icon=”https://drive-thirdparty.googleusercontent.com/16/type/application/vnd.google-apps.document” width=”100%” height=”400″ style=”embed”]

FIT – MGT5156 – Week 7

Discussion Post

Desktop Virtualization
Discuss whether desktop virtualization is a panacea.

No, virtualization (desktop or server) is not a panacea. Though complex to pull off, attackers can exploit hypervisor technology by virtualizing an operating system and running malware at a level below the virtualized workloads, at the hypervisor layer. This approach makes the malware very hard to detect and operating system agnostic. (Ford, 2018) This type of malware has become known as a virtual-machine based rootkit (VMBR). A VMBR installs a virtual-machine monitor (VMM) underneath an existing operating system and hoists the original operating system into a virtual machine (a guest OS). (King & Chen, 2006)

Virtualization can be very helpful for malware analysis. Virtualization can provide isolation; it can create a trusted monitor, so the hypervisor can watch how the system works while itself being protected from tampering; and it can allow for rollback or disposable computing, which can be very useful for malware testing. (Ford, 2018) While countless benefits are derived from virtualization, the hypervisor is just software, and like any other software, it can have vulnerabilities. If the hypervisor were to be exploited, it could provide an attacker with low-level system access, which could have serious, widespread implications. Successful exploitation of the hypervisor would give the attacker full control over everything in the hypervisor environment: all virtual machines, data, etc. (Obasuyi & Sari, 2015)

The “cloud” makes extensive use of virtualization technologies. (Ford, 2018) For example, Amazon Web Services (AWS) is built on the Xen hypervisor. Given the security concerns mentioned above and associated with the hypervisor, you can see the concern given the scale and multi-tenancy of cloud providers. (Vaughan-Nichols, 2015) Let’s face it, the cloud is one giant honeypot; it’s less a question of “if” than “when” a low-level exploit will happen in the cloud. Only time will tell.

To bring it back to desktop virtualization, I might argue that the security concerns with desktop virtualization exceed the security concerns with server virtualization for one reason: linked clones. The use of linked clones is quite common in desktop virtualization, but with all virtual desktops sharing common executables and libraries, malware can metastasize with each virtual desktop instantiation, and this would not require a compromised hypervisor, but rather a compromised master image. The other thing we need to consider is transparent page sharing and the potential manipulation of EXEs and DLLs in memory at the hypervisor level, and the impact that could have.

References

Ford, R. (2018, June 11). Virtualization. Retrieved June 11, 2018, from http://learningmodules.bisk.com/play.aspx?xml=L0Zsb3JpZGFUZWNoTUJBL01HVDUxNTYvQ1lCNTI4ME0xMFYxL0RhdGEvbW9kdWxlLnhtbA

King, S. T., & Chen, P. M. (2006). SubVirt: Implementing malware with virtual machines. In 2006 IEEE Symposium on Security and Privacy (pp. 314-327). doi:10.1109/SP.2006.38

Obasuyi, G. C., & Sari, A. (2015). Security Challenges of Virtualization Hypervisors in Virtualized Hardware Environment. International Journal of Communications, Network and System Sciences, 08(07), 260-273. doi:10.4236/ijcns.2015.87026

Vaughan-Nichols, S. J. (2015, December 04). Hypervisors: The cloud’s potential security Achilles heel. Retrieved June 13, 2018, from https://www.zdnet.com/article/hypervisors-the-clouds-potential-security-achilles-heel/

 

Discussion Response 1

I enjoyed your post, would like to offer up some food for thought.

There are lots of good reasons for desktop virtualization; the catalysts that I see typically revolve around centralized command and control, with the desire for centralized command and control often being aided by regulatory and/or compliance requirements. Five or so years ago we were seeing a huge push towards desktop and application virtualization on platforms like Citrix XenDesktop, Citrix XenApp, and VMware View, but this trend seems to have slowed, and it’s not hard to understand why.

Let’s look at a few of the challenges with desktop virtualization. From a security perspective, you now have east-west traffic to be concerned with; this is the traffic taking place on the same physical hardware, not ingressing or egressing the physical hardware (north-south traffic), so traditional network security controls don’t really see it. This was a general hypervisor problem which has been addressed, but a concern nonetheless. Next, we have the unpredictable performance profile of end-user usage: one user performing an I/O intensive process has the ability to impact all other users on that physical system. Then there is the con of centralization, the risk that a shared component outage has a much larger blast radius. All of these contributing factors make desktop virtualization fairly costly.

New technologies like SaaS and browser-based apps, the rich user experience of HTML5, the ease of cross-platform development, the BYOD push, etc. seem to have slowed the desktop virtualization craze. Desktop virtualization is still happening, but it seems to have slowed. I use virtual desktops all the time for remote access or to run thick apps, but the virtual desktop is used more like an application rather than as a day-to-day shell from which I work. IMO, as long as there is Java and the umpteen versions of Java, compatibility issues between apps and Java versions, etc., we will have a need to use the virtual desktop to solve these issues. VDI also allows us to take thick client applications and quickly centralize them, although I know many people who have done this who wish they had just done an app rewrite rather than spending the time building VDI.

I agree with the VirtualizedGeek that DaaS is a better solution than VDI for those of us who need a cloud-based Windows desktop. (VirtualizedGeek, 2014) The article is a bit dated, and today many of us use AWS WorkSpaces or another DaaS solution for this very reason. I also agree with Ben Kepes that “Desktop as a Service is last year’s solution to last decade’s problem.” (Kepes, 2013) The bottom line is the move toward mobile and web apps will continue, so while VDI may not be dying, I don’t expect it to flourish.

References

Kepes, B. (2013, November 06). Death To VDI. Or DaaS. Or Whatever It’s Called This Week. Retrieved June 17, 2018, from https://www.forbes.com/sites/benkepes/2013/11/06/death-to-vdi-or-daas-or-whatever-its-called-this-week/#3e4c3295096a

Rouse, M. (2018, June 17). What is east-west traffic? – Definition from WhatIs.com. Retrieved June 17, 2018, from https://searchsdn.techtarget.com/definition/east-west-traffic

VirtualizedGeek. (2014, February 18). VDI is dying so what now? Retrieved June 17, 2018, from http://www.virtualizedgeek.com/2014/02/vdi-token-ring/

 

Discussion Response 2

Enjoyed the post, great read as usual, always like the emotion in your writing.

I have one rule about technology: never make a technology decision based on “saving money”. When the primary value proposition is “you’ll save money”, it almost always tells the story that there is no other value proposition meaningful enough to be a motivator. I have yet to meet someone who made the decision to implement VDI for cost savings who is happy they made the decision. I have met those who had to do it for regulatory and compliance purposes, who likely spent and continue to spend more on their virtual desktop infrastructure than they would have spent deploying desktops; these folks still may not be happy, but they are committed to the technology to solve a business problem for which they have yet to find another solution.

Desktop virtualization has been around for a long time. Citrix, the undisputed leader in the space, started in 1989 with the development of their protocol called ICA (Independent Computing Architecture). In the late 1990s Citrix released MetaFrame 1.0 to match the release of Microsoft Terminal Server. Citrix capitalized on the weakness of Microsoft’s RDP protocol, and MetaFrame and the ICA protocol became the de facto standard for multi-tenancy at scale. The mainframe and mini-computer world was used to multi-tenancy, but Citrix brought multi-tenancy to the micro-computer and Wintel platform. This market pivot actually has close parallels to the cloud pivot we are seeing in enterprise computing today. In the 90s and early 2000s consumers listened to vendors; today consumers listen to the community, and the biggest voices are those consuming the platform at scale. Fortunately for Citrix this wasn’t the case as they rose to market prominence. There is no doubt that today Netflix holds as much weight with a new AWS user as AWS itself; Netflix is the 900-pound consumer gorilla and their lessons learned are consumer lessons, not the lessons of AWS, who want you on the platform. The Netflix lessons are extremely relevant to the cloud, but they are also relevant to a move to multi-tenancy in any form, VDI being one example. I think we are quickly moving past the days where “a guy with a huge handlebar mustachio with a cape on the back of a wagon” can espouse a cure-all. And for those willing to buy, well, in today’s day and age it feels more like natural selection than someone being bamboozled.

Here are some of the publically available Netflix lessons with some personal commentary. I love these lessons learned and I use them in different contexts all the time. (Netflix Technology Blog, 2010) (Townsend, 2016)

  1. Dorothy, you’re not in Kansas anymore. It’s going to be different, new challenges, new methods and a need to unlearn much of what you are used to doing.
  2. It’s not about cost savings. Focus on agility and elasticity.
  3. Co-tenancy is hard. Architecture and process matter more than ever.
  4. The best way to avoid failure is to fail constantly. This is one that many enterprises are unwilling to accept. Trading the expectation of uptime for the expectation of failure and architecting to tolerate failure.
  5. Learn with real scale, not toy models. Buying a marketecture is not advisable, you need to test with your workloads, at scale.
  6. Commit yourself. The cost motivator is not enough; the motivator has to be more.
  7. Talent. The complexity and blast radius of what you are embarking on is significant, you need the right talent to execute.

The consumption and effective use of ever-changing and complex services require us to think differently. Netflix consumes services on AWS and because they don’t have to build hardware, install operating systems, build object storage platforms, write APIs to abstract and orchestrate the infrastructure, etc… they can focus on making their application more resilient by building platforms like the Simian Army (Netflix Technology Blog, 2011) and other tools like Hystrix (Netflix Technology Blog, 2012) and Visceral (Netflix Technology Blog, 2016). The biggest problem with technologies that seemingly make things simpler is that the mass-market consumer looks for cost saving, they look for things to become easier, to lessen the hard dollar spend, to lessen the spend on talent, etc… and they don’t redirect time or dollars to the new challenges created by new technologies, this is a recipe for disaster.

References

InfoQ. (2017, February 22). Mastering Chaos – A Netflix Guide to Microservices. Retrieved June 17, 2018, from https://youtu.be/CZ3wIuvmHeM

Netflix Technology Blog. (2010, December 16). 5 Lessons We’ve Learned Using AWS – Netflix TechBlog – Medium. Retrieved June 17, 2018, from https://medium.com/netflix-techblog/5-lessons-weve-learned-using-aws-1f2a28588e4c

Netflix Technology Blog. (2011, July 19). The Netflix Simian Army – Netflix TechBlog – Medium. Retrieved June 17, 2018, from https://medium.com/netflix-techblog/the-netflix-simian-army-16e57fbab116

Netflix Technology Blog. (2012, November 26). Introducing Hystrix for Resilience Engineering – Netflix TechBlog – Medium. Retrieved June 17, 2018, from https://medium.com/netflix-techblog/introducing-hystrix-for-resilience-engineering-13531c1ab362

Netflix Technology Blog. (2016, August 03). Vizceral Open Source – Netflix TechBlog – Medium. Retrieved June 17, 2018, from https://medium.com/netflix-techblog/vizceral-open-source-acc0c32113fe

Townsend, K. (2016, February 17). 5 lessons IT learned from the Netflix cloud journey. Retrieved June 17, 2018, from https://www.techrepublic.com/article/5-lessons-it-learned-from-the-netflix-cloud-journey/

 

Essay Assignment

In an essay form, develop an example of an XSS vulnerability and an exploit which displays it. You will be expected to include a snippet of code which illustrates an XSS vulnerability and also provides some general discussion of XSS vulnerabilities.

[google-drive-embed url=”https://docs.google.com/document/d/1bdAAfGcwrgux3_DYSNcyIbWBvUGIDzyRMDYtLuv1u2g/preview?usp=drivesdk” title=”Bocchinfuso – FIT – MGT5156 – Week 7 – Assignment 1″ icon=”https://drive-thirdparty.googleusercontent.com/16/type/application/vnd.google-apps.document” width=”100%” height=”400″ style=”embed”]

 

Web Vulnerabilities Module Assignment

[google-drive-embed url=”https://docs.google.com/document/d/1-mA202z2ANNYRwkeK_LE-vOWVR-ioXGF9TD9TLRRVqk/preview?usp=drivesdk” title=”Bocchinfuso – FIT – MGT5156 – Week 7 – Assignment 2″ icon=”https://drive-thirdparty.googleusercontent.com/16/type/application/vnd.google-apps.document” width=”100%” height=”400″ style=”embed”]

FIT – MGT5156 – Week 6

Discussion Post

Discuss how testing of anti-malware should be conducted.

The only absolute rule seems to be: don’t conduct anti-malware testing on your production systems. Testing of anti-malware should be performed in an isolated malware testing environment, and care should be taken to ensure that the system is completely isolated. For example, if you construct a malware test lab using a hypervisor and virtual machines but keep the virtual machines on your production network, well, let’s say that’s not isolated. If correctly set up and configured, hypervisors and virtual machines can be a tester’s best friend.
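As a minimal sketch of what “correctly set up” might look like with VirtualBox (the VM name is a placeholder), the analysis VM gets a host-only NIC with no uplink and a clean snapshot to roll back to after detonation:

$ VBoxManage hostonlyif create                                            # creates an isolated vboxnet0 network
$ VBoxManage modifyvm "malware-lab" --nic1 hostonly --hostonlyadapter1 vboxnet0
$ VBoxManage snapshot "malware-lab" take clean-baseline                   # rollback point before running samples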

The Anti-Malware Testing Standards Organization (AMTSO) has developed and documented all sorts of testing guidance, from Principles of Testing to Facilitating Testing. The key here is that the testing method must be safe and it must use methods which are generally accepted (consistent, unbiased, transparent, empirical, etc.). (AMTSO, 2018)

The use of generally accepted tools and toolkits for malware research, testing and analysis can easily overcome certain testing obstacles, allowing the analyst to focus on the testing methodology rather than the acceptance of a specific testing tool or platform. Safely conducting testing and ensuring that you are not endangering yourself and others is the burden of the analyst; the complexity of the technologies being used to construct isolated environments and the malware itself can make this complicated, so there is plenty of room for error.

My two favorite toolkits for malware testing are:

  • Flare VM (Kacherginsky, 2017) is essentially a PowerShell script that uses Boxstarter and Chocolatey to turn a Windows 7 or later machine into a malware analysis distribution by quickly loading all the tools you need to do malware analysis.
  • REMnux is a Linux distribution for malware analysis and reverse-engineering. Like Flare VM, REMnux contains a set of tools to perform malware analysis and reverse engineering. Because REMnux is built on Linux (an open source operating system), it can be deployed using an install script like Flare VM or via a virtual machine (VM) image which packages the OS and tools making it easy to download, deploy and use.

There are a plethora of security-focused Linux distributions like Kali Linux, BackBox Linux, and the distribution which I use, Parrot Linux. All of these Linux-based security-focused distributions offer some of the tools required for malware analysis, but none are focused on malware analysis like REMnux.

Anti-malware is a requirement; it is the last line of defense. Simple malware scanners, heuristics, and activity/anomaly-based detection are not enough; next-generation anti-malware with real-time scanning and discovery is a necessity. Malware can be identified using real-time detection technologies by monitoring activities like:

  • Attempts to alter restricted locations such as registry or startup files.
  • Attempts to modify executables.
  • Opening, deleting or editing files.
  • Attempts to write to or modify the boot sector.
  • Creating, accessing or adding macros to documents.

Not all anti-virus and anti-malware is created equal. avtest.org conducts independent analysis on the efficacy of anti-virus and anti-malware solutions, services like this can be an excellent resource for those looking to make the right decision when selecting anti-virus and anti-malware solutions.

I love this quote: “People have to understand that anti-virus is more like a seatbelt than an armored car: It might help you in an accident, but it might not,” Huger said. “There are some things you can do to make sure you don’t get into an accident in the first place, and those are the places to focus, because things get dicey real quick when today’s malware gets past the outside defenses and onto the desktop.” (Krebs, 2010)

References

Adams, J. (2016, June 8). Building a Vulnerability/Malware Test Lab. Retrieved June 6, 2018, from https://westoahu.hawaii.edu/cyber/building-a-vulnerability-malware-test-lab/

AMTSO. (2018, June 6). Welcome to the Anti-Malware Testing Standards Organization. Retrieved June 6, 2018, from https://www.amtso.org/

Kacherginsky, P. (2017, July 26). FLARE VM: The Windows Malware Analysis Distribution You’ve Always Needed! « FLARE VM: The Windows Malware Analysis Distribution You’ve Always Needed! Retrieved June 6, 2018, from https://www.fireeye.com/blog/threat-research/2017/07/flare-vm-the-windows-malware.html

Krebs, B. (2010, June 25). Krebs on Security. Retrieved June 6, 2018, from https://krebsonsecurity.com/2010/06/anti-virus-is-a-poor-substitute-for-common-sense/

REMnux. (2018, June 6). REMnux: A Linux Toolkit for Reverse-Engineering and Analyzing Malware. Retrieved June 6, 2018, from https://remnux.org/

Williams, G. (2018, June 6). Detecting and Mitigating Cyber Threats and Attacks. Retrieved June 6, 2018, from https://www.coursera.org/learn/detecting-cyber-attacks/lecture/xE8ns/snort

 

Discussion Response 1

Good post. IMO it’s essential when discussing anti-malware to consider attack vectors. While anti-malware heuristics are getting better, aided by deep learning, the primary attack vector remains the user, and it seems unlikely that a change in trajectory is on the near-term horizon. Attackers use numerous attack vectors, and when I think about the needle used to inject the virus I think about examples such as:

  • Spam: Where email or social media are the delivery mechanism for malware.
  • Phishing, Spear Phishing, Spoofing, Pharming: Where attackers impersonate legitimate sources or destinations to trick unsuspecting victims to sites that capture personal information, exploit them, etc.

I use the examples above as a way to convey that exploitation often begins with the exploitation of an individual, this happens before the malware infects their system. A lack of knowledge, skill, vigilance, a sense of trust, etc. are all too often the root cause of an issue.

I just recently started taking a Coursera course called “Usable Security”, and one area it focuses on is HCI (Human-Computer Interaction). The instructors stress how important it is for the designer to make safeguards understandable and usable, not by the minority of experts but by the majority of casual users. They use two specific examples, at least so far. The first is a medical cart with a proximity sensor. On paper, the proximity sensor seems like a great idea, but it turns out the doctors didn’t like it, so they covered the proximity sensors with styrofoam cups, making the system less effective than the prior system, which required the doctor to lock the computer after their session with a reasonable login timeout. The second is the SSL warning system in Firefox, the warning you get about an expired or unsigned certificate, citing that most people don’t know what this means and add an exception without much thought.

Over the years I have observed situations like the above with anti-malware software: the software slows the system down, so the user disables it, or the anti-malware software reports so many false positives that the user disables it. The bottom line is there is no replacement for human vigilance. I wonder if we can get to a place where the software can protect the user from himself or herself. Whatever the solution, I believe it will need to be frictionless; we aren’t there yet, but maybe someday.

References

Golbeck, J. (2018, June 10). Usable Security. University of Maryland, College Park. Retrieved June 10, 2018, from https://www.coursera.org/learn/usable-security

Texas Tech University. (2018, June 10). Scams – Spam, Phishing, Spoofing and Pharming. Retrieved June 10, 2018, from https://www.ttu.edu/cybersecurity/lubbock/digital-life/digital-identity/scams-spam-phishing-spoofing-pharming.php

 

Discussion Response 2

All good points. It seems almost inconceivable that a tester would be testing something of which they have no knowledge, but of course, we know this is often the case (and this goes way beyond anti-malware software).

You bring up a good point regarding what the tester is testing for. I think we have seen the era of “total security” products that cover everything from firewall to anti-malware; this is likely born from necessity and the need to move from reactive, defensive anti-malware focused on scans to proactive strategies which attempt to keep the malware out rather than just focusing on detection and remediation after the fact. I think we are seeing systems emerge today which leverage data mining and deep learning to better protect users. With the level of sophistication being used in both malware and anti-malware, I can’t imagine the role of the tester getting any easier. We live in interesting times, and on a positive note, I think we can anticipate that they will only get more interesting.

 

Discussion Response 3

Good post. We’ve certainly seen some leaders in the security field have their ethics and motives questioned, most notably Kaspersky Lab (Volz, 2017). I have to admit, in the case of Kaspersky Lab it’s hard not to wonder if this isn’t just a bunch of legislators who may have a bigger struggle with ethics and motivation than Kaspersky Lab does; this is a slippery slope. We live in a global economy, and having read what Kaspersky Lab volunteered to do, I can’t help wondering if this move may have some marketing flair associated with it. avtest.org has consistently rated Kaspersky Lab anti-malware among the best in the industry (AV-TEST, 2018). Is it possible that the Kremlin could have an influence on Kaspersky Lab? I suppose it is (Matlack, Riley & Robertson, 2015), but do I think this was the motivation for the legislation? Not likely.

References

AV-TEST. (2018, April 06). AV-TEST – The Independent IT-Security Institute. Retrieved June 10, 2018, from https://www.av-test.org/en/award/2017/

Matlack, C., Riley, M., & Robertson, J. (2015, March 19). Cybersecurity: Kaspersky Has Close Ties to Russian Spies. Retrieved June 11, 2018, from https://www.bloomberg.com/news/articles/2015-03-19/cybersecurity-kaspersky-has-close-ties-to-russian-spies

Volz, D. (2017, December 12). Trump signs into law U.S. government ban on Kaspersky Lab software. Retrieved June 10, 2018, from https://www.reuters.com/article/us-usa-cyber-kaspersky/trump-signs-into-law-u-s-government-ban-on-kaspersky-lab-software-idUSKBN1E62V4?utm_source=applenews

 

Essay Assignment

How does anti-malware software detect viruses? What techniques are available, and how do they differ?

[google-drive-embed url=”https://docs.google.com/document/d/1-5Gmk97WZQ5eKVcsoXGSzJ8VJIklMxLA7U-onkgc5-I/preview?usp=drivesdk” title=”Bocchinfuso – FIT – MGT5156 – Week 6 – Assignment 1″ icon=”https://drive-thirdparty.googleusercontent.com/16/type/application/vnd.google-apps.document” width=”100%” height=”400″ style=”embed”]

 

Viruses and Virus Detection Module Assignment

[google-drive-embed url=”https://docs.google.com/document/d/1tgs93eHUzrPxTmzt2hOQt6XA9naQ3SKi7yvEd1df8w8/preview?usp=drivesdk” title=”Bocchinfuso – FIT – MGT5156 – Week 6 – Assignment 2″ icon=”https://drive-thirdparty.googleusercontent.com/16/type/application/vnd.google-apps.document” width=”100%” height=”400″ style=”embed”]

FIT – MGT5156 – Week 5

Discussion Post

Wow, week five already! The long weekend helped me get caught up and break the cycle I’ve been on, yay!

While not the latest in malware, I decided to discuss WannaCry (also known as WCry or WanaCryptor). (Hunt, 2017) The reason for my choice is that I have personal experience with this self-propagating (worm-like) ransomware. I have spent the last year working on various projects to mitigate the potential impact of ransomware like WannaCry. In this post, I will explain the ransomware approach that WannaCry took, as it does not differ dramatically from most recent ransomware. I will also talk a bit about some of the projects I have been involved in, some of my customers’ concerns, and some mitigation strategies, like WORM (Write once read many, 2018) and Isolated Recovery (Korolov, 2016), that I have helped automate and implement for customers.

A simple explanation of WannaCry is that it encrypts files, rendering them useless, and demands a ransom be paid, in bitcoin of course, to have the files decrypted.

Some basic information on WannaCry (Berry, Homan & Eitzman, 2017):

  1. WannaCry exploits a vulnerability in Microsoft’s Server Message Block (SMB) protocol (also known as CIFS, or Common Internet File System). (Microsoft, 2017) For our purposes, we can consider SMB and CIFS synonymous, but in the interest of education: the SMB protocol was invented by IBM in the mid-1980s, and CIFS is Microsoft’s implementation of SMB.
  2. The WannaCry malware consists of two key functions: encryption and propagation.
  3. WannaCry leverages an exploit called EternalBlue (NVD, 2017) to exploit the vulnerability in Microsoft’s SMB protocol implementation.
  4. What makes WannaCry and other ransomware attacks incredibly dangerous is that once on a corporate network they begin propagating using vulnerabilities in sharing protocols like SMB. These protocols are difficult to firewall because users rely on them heavily to share data across secure networks.

Ransomware attacks like WannaCry, NotPetya, and Locky created serious concern across the many enterprises that store terabytes and petabytes of data on shares accessed using the SMB protocol. Organizations started thinking about how they could mitigate the risk of ransomware and what their recovery plan would be if they were hit with ransomware.

Many customers who share data on the Windows server platform leverage VSS (Volume Shadow Copy Service) to take snapshots and protect / version data. The idea of a snapshot is that it is a point-in-time copy which a user can roll back to. Developers writing malicious software understand pervasive mitigation techniques like the use of VSS snapshots, and they address them. Crafty developers of malicious software use vssadmin.exe (e.g., “vssadmin delete shadows /all /quiet”) to remove VSS snapshots (previous versions) so a user can’t roll back to an unencrypted version of the file(s). (Abrams, 2016)

The obvious risk of having petabytes of data encrypted has created questions regarding the vulnerability of enterprise NAS (Network Attached Storage) devices from manufacturers like Dell EMC, NetApp, etc. Enterprise-class NAS devices provide additional safeguards, like filesystems which are not NTFS, no hooks to vssadmin, read-only snapshots, etc., so the protections are greater, but corporations are still concerned with zero-day exploits, so additional mitigation approaches are being developed. Backing up your data is an obvious risk mitigation practice, but many enterprises are backing up to disk-based backup devices which are themselves accessible via the SMB protocol, so this has raised additional questions and cause for concern. One answer is a model called “Isolated Recovery”, which leverages an air gap (Korolov, 2016) and other protection methods to ensure that data is protected; this is more a programmatic implementation of a process than it is a technology.

Example Topology
[HOST] <-> [NETWORK] <-> [SHARED STORAGE] <-> [NETWORK] <-> [BACKUP TARGET]
Note: This is a simple representation but what is important to know here is that the HOST, SHARED STORAGE and BACKUP TARGET (could be a disk-based backup target or a replicated storage device) are all SMB accessible.

Example Isolated Recovery Topology
[HOST] <-> [NETWORK] <-> [SHARED STORAGE] <-> [NETWORK] <-> [BACKUP TARGET] <-> [NETWORK] <-> /AIR GAP/ <-> [ISOLATED RECOVERY TARGET]
Note: In this case, there is a tertiary copy of the data which resides in an isolated recovery environment which is air gapped. This paradigm could also be applied with only two copies of the data by air gapping the backup target; a little trickier, but it can be done.

From a programmatic process perspective, the process might look something like this: https://gist.github.com/rbocchinfuso/a8b688546fad294d04281ab6eb632bfd#file-isolatedrecovery-md

A WORM (write once read many, not worm as in virus) process triggered via cron or some other scheduler or trigger mechanism might look something like this: https://gist.github.com/rbocchinfuso/b78a8a3a41021fc0df9c/#file-retentionlock-sh
Note:  This script is specific to WORM on a Data Domain disk-based backup device and leverages a feature called Retention Lock. The atime (access time) (Reys, 2008) of the file(s) is changed to a date in the future, which places the file in WORM-compliant mode until that date; once the date is reached, the file reverts back to RW and can be deleted or modified.
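
For illustration, here is a minimal C sketch of the same atime trick (the linked script does it with touch); the mount path and the one-year retention window are hypothetical placeholders:

/* worm_lock.c - sketch: push a file's atime into the future to engage
   Retention Lock on a Data Domain share; equivalent to touch -a -t <future> <file>.
   The path below is a hypothetical mount point. */
#include <stdio.h>
#include <sys/stat.h>
#include <time.h>
#include <utime.h>

int main(void) {
    const char *path = "/mnt/datadomain/backup/archive.tar"; /* hypothetical */
    struct stat st;
    if (stat(path, &st) != 0) { perror("stat"); return 1; }

    struct utimbuf times;
    times.modtime = st.st_mtime;                   /* leave mtime untouched */
    times.actime  = time(NULL) + 365L * 24 * 3600; /* retention date ~1 year out */
    if (utime(path, &times) != 0) { perror("utime"); return 1; }

    puts("atime set one year out; on a Retention Lock mtree the file is now WORM");
    return 0;
}

Run against a Retention Lock enabled share, the file should refuse modification or deletion until the date passes; on a plain local filesystem it just changes a timestamp.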

References

Abrams, L. (2016, April 04). Why Everyone Should disable VSSAdmin.exe Now! Retrieved May 29, 2018, from https://www.bleepingcomputer.com/news/security/why-everyone-should-disable-vssadmin-exe-now/

Air gap (networking). (2018, May 27). Retrieved May 29, 2018, from https://en.wikipedia.org/wiki/Air_gap_(networking)

Berry, A., Homan, J., & Eitzman, R. (2017, May 23). WannaCry Malware Profile. Retrieved May 29, 2018, from https://www.fireeye.com/blog/threat-research/2017/05/wannacry-malware-profile.html

Hunt, T. (2017, May 18). Everything you need to know about the WannaCry / Wcry / WannaCrypt ransomware. Retrieved May 29, 2018, from https://www.troyhunt.com/everything-you-need-to-know-about-the-wannacrypt-ransomware/

Korolov, M. (2016, May 31). Will your backups protect you against ransomware? Retrieved May 29, 2018, from https://www.csoonline.com/article/3075385/backup-recovery/will-your-backups-protect-you-against-ransomware.html

Reys, G. (2008, April 11). atime, ctime and mtime in Unix filesystems. Retrieved May 29, 2018, from https://www.unixtutorial.org/2008/04/atime-ctime-mtime-in-unix-filesystems/

Microsoft. (2017, October 11). Microsoft Security Bulletin MS17-010 – Critical. Retrieved May 29, 2018, from https://docs.microsoft.com/en-us/security-updates/securitybulletins/2017/ms17-010

NVD. (2017, March 16). NVD – CVE-2017-0144 – NIST. Retrieved May 29, 2018, from https://nvd.nist.gov/vuln/detail/CVE-2017-0144

Write once read many. (2018, April 10). Retrieved May 29, 2018, from https://en.wikipedia.org/wiki/Write_once_read_many

 

Discussion Response 1

Good post on a very relevant and current topic. IMO this trend will continue; the replacement of ASICs and RTOSes with commodity ARM/x86 architectures and Linux makes it a lot easier for someone to create malicious code that can exploit routers across multiple manufacturers like Linksys, MikroTik, Netgear, and TP-Link. I remember 20 years ago, if you wanted to go fast you used an ASIC and an RTOS like VxWorks, but x86 got so fast that ASICs no longer made sense for most applications; the ability to commoditize the hardware with a general purpose OS like Linux drove down cost and increased release velocity, a win all around. With that said, I think we may be on the doorstep of a new cycle. We are seeing general purpose GPUs being used for everything from machine learning to crypto mining; these are essentially general purpose integrated circuits. Power and environmental requirements are a big deal with general purpose GPUs, and I believe we are on the doorstep of a cycle that sees the return of the ASIC. The TPU is the beginning of what I believe will be a movement to go faster, get greener, and be more secure.

 

Discussion Response 2

Well done, as usual; a well-researched, well-written, and engaging exploration of different types of malware.
My response is short this week because I spent most of my reading and responding time on Dr. Ford’s polymorphic coding challenge, a great exercise; I wish there was more work like this.

 

Discussion Response 3

Dr. Ford’s polymorphic coding challenge

Has anyone else given Dr. Ford’s polymorphic coding challenge a try?

Here is where I am:

  1. I am a Linux user, so I fired up a Win7 VM (I suppose I could have done this in a dosbox or qemu freedos session, like Dr. Ford suggested, but it has been so long since I worked in 80 columns that I find it unbearable).
  2. Used Bloodshed Dev-C++ w/ MinGW as the C compiler.
  3. Got this far, but I think I am missing something because, obviously, the hex signature is the same for each .com file. I feel like this should not be the expected behavior.

Source Code: https://gist.github.com/5859ee8be77fd188f78b64eaa8538c62#file-hello-c

YouTube video of the compile, execute and hex signature view of hello0.com and hello1.com files: https://youtu.be/2vQOS4E1JB0
Note: Be sure to watch in 1080p HD quality.

I am not sure how I would alter the hex. I believe the hex code at the top of the stack needs to be what it is; the hex for “Hello World!” just maps back to the ASCII character codes.

When I look at hello0.com, hello1.com, etc. with a hex viewer the hex is the same, as you would expect. Does anyone have any thoughts on this? I would think a virus scanner would pick up this signature pretty easily.
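
One idea I want to try next: leave the executed code alone and append random junk after the string terminator. The junk is never reached because the program exits via int 21h first, but every generated file then has a different whole-file hex signature. Below is a minimal C sketch of such a generator (a toy, not a real virus); the byte layout assumes the standard DOS print-and-exit stub. Note this only defeats naive whole-file hashing; a true polymorphic engine would mutate or re-encode the executed code itself behind a varying decoder.

/* polygen.c - toy generator: emits functionally identical DOS .com files
   whose trailing bytes are random junk, so each file hashes differently.
   Not true polymorphism: the executed code never changes. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* DOS .com stub: print "Hello World!$" via int 21h/AH=09h, exit via AX=4C01h */
static const unsigned char stub[] = {
    0x0E, 0x1F,             /* push cs ; pop ds                    */
    0xBA, 0x0E, 0x01,       /* mov dx,0x010e (string address)      */
    0xB4, 0x09,             /* mov ah,0x09                         */
    0xCD, 0x21,             /* int 0x21                            */
    0xB8, 0x01, 0x4C,       /* mov ax,0x4c01                       */
    0xCD, 0x21,             /* int 0x21 (exit; nothing below runs) */
    'H','e','l','l','o',' ','W','o','r','l','d','!','$'
};

int main(void) {
    char name[32];
    srand((unsigned)time(NULL));
    for (int n = 0; n < 4; n++) {
        snprintf(name, sizeof name, "hello%d.com", n);
        FILE *f = fopen(name, "wb");
        if (!f) { perror(name); return 1; }
        fwrite(stub, 1, sizeof stub, f);
        int pad = 8 + rand() % 24;     /* random amount of trailing junk */
        for (int i = 0; i < pad; i++)
            fputc(rand() & 0xFF, f);   /* never executed */
        fclose(f);
    }
    return 0;
}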

 

Discussion Response 4

Replying to my own post with disassembled hello0.com and hello1.com files.
I am wondering if this counts as polymorphic, because hello.exe and the spawned hello?.com files have differing signatures.

> ndisasm hello0.com
00000000 0E push cs
00000001 1F pop ds
00000002 BA0E01 mov dx,0x10e
00000005 B409 mov ah,0x9
00000007 CD21 int 0x21
00000009 B8014C mov ax,0x4c01
0000000C CD21 int 0x21
0000000E 48 dec ax
0000000F 656C gs insb
00000011 6C insb
00000012 6F outsw
00000013 20576F and [bx+0x6f],dl
00000016 726C jc 0x84
00000018 642124 and [fs:si],sp

bocchrj@WIN7 C:\src\hello
> decompile --default-to ms-dos-com hello0.com

bocchrj@WIN7 C:\src\hello
> decompile --default-to ms-dos-com hello1.com

bocchrj@WIN7 C:\src\hello
> type hello0.asm
;;; Segment code (0C00:0100)

;; fn0C00_0100: 0C00:0100
fn0C00_0100 proc
push cs
pop ds
mov dx,010E
mov ah,09
int 21
mov ax,4C01
int 21
0C00:010E 48 65 He
0C00:0110 6C 6C 6F 20 57 6F 72 6C 64 21 24 llo World!$

bocchrj@WIN7 C:\src\hello
> type hello1.asm
;;; Segment code (0C00:0100)

;; fn0C00_0100: 0C00:0100
fn0C00_0100 proc
push cs
pop ds
mov dx,010E
mov ah,09
int 21
mov ax,4C01
int 21
0C00:010E 48 65 He
0C00:0110 6C 6C 6F 20 57 6F 72 6C 64 21 24 llo World!$

 

Essay Assignment

What are the financial and other models which drive malware? How do they impact the types of malware seen?

[google-drive-embed url=”https://docs.google.com/document/d/1F9ET0EbuasT_E0f_1TOUqOzaCkrciUfGozEcbCC2JfE/preview?usp=drivesdk” title=”Bocchinfuso – FIT – MGT5156 – Week 5 – Assignment 1″ icon=”https://drive-thirdparty.googleusercontent.com/16/type/application/vnd.google-apps.document” width=”100%” height=”400″ style=”embed”]

 

Malware History Module Assignment

[google-drive-embed url=”https://docs.google.com/document/d/13zIZqwoLdvYj72Hx6F3itdMnmQm3OKI6OjxohvzCJ_8/preview?usp=drivesdk” title=”Bocchinfuso – FIT – MGT5156 – Week 5 – Assignment 2″ icon=”https://drive-thirdparty.googleusercontent.com/16/type/application/vnd.google-apps.document” width=”100%” height=”400″ style=”embed”]

FIT – MGT5156 – Week 4

Discussion Post

Discuss ROP and code injection.

Late yet again, probably later than I needed to be, but like Dr. Ford said this week at the beginning of the lecture, this was the week I was waiting for, and I had to get a little dirty and break some stuff.

Code injection typically refers to getting something (data) that is not supposed to be machine code to run as code. Code injection tries to take control of a machine by gaining privilege; the privilege that code injection works to obtain is the ability to run arbitrary binary code.
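
To make “run data as code” concrete, here is a minimal x86 sketch of my own (not from the lecture): it plants a single ret instruction in a data buffer and jumps to it. On a modern system with DEP/NX enabled the call should fault because the page holding the buffer is non-executable, and that fault is the defense doing its job; on a system without NX it would simply return.

/* nx_demo.c - sketch: attempt to execute bytes sitting in a data buffer.
   With DEP/NX this should crash (the page is non-executable). x86 only. */
#include <stdio.h>

static unsigned char payload[] = { 0xC3 }; /* x86 'ret' instruction */

int main(void) {
    void (*fn)(void) = (void (*)(void))payload; /* treat data as code */
    printf("jumping into the data segment...\n");
    fn();                                       /* expect a crash under NX */
    printf("executed data as code (NX not enforced)\n");
    return 0;
}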

To understand code injection and buffer overflows, understanding the stack is essential. Return-Oriented Programming (ROP) focuses on overwriting a buffer on the stack, which overwrites the return address and allows the attacker to jump back onto the stack and execute instructions. To prevent this, a few defense mechanisms have been developed. (Ford, 2018)

  • The no-execute (NX) flag marks pages of memory as non-executable. Data Execution Prevention (DEP) works by using the no-execute flag to prevent attackers from executing data as if it were code; attackers are unable to execute code from the stack.
  • Address Space Layout Randomization (ASLR) works by randomly moving segments of a program around memory; this prevents the attacker from predicting gadget addresses.
  • A stack cookie (canary) is a random value written to the stack immediately preceding the return address. Before the function returns, the system checks to see if the canary has been overwritten; if the canary has been overwritten, the system will trap execution (a conceptual sketch follows this list).
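
Here is a conceptual C sketch of the stack cookie mechanics; real compilers (e.g., gcc with -fstack-protector) insert the equivalent checks automatically and seed the canary from a secure random source at startup, so treat this purely as an illustration:

/* canary_demo.c - conceptual stack cookie: a random value placed near the
   return address is verified before the function returns; an overflow that
   reaches the return address clobbers the cookie first. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

static unsigned long canary; /* real implementations use a secure random source */

void vulnerable(const char *input) {
    unsigned long cookie = canary; /* sits between the locals and the return address */
    char buffer[16];
    strcpy(buffer, input);         /* an overflow clobbers cookie before the return address */
    if (cookie != canary) {        /* checked before returning */
        fprintf(stderr, "*** stack smashing detected ***\n");
        abort();
    }
}

int main(int argc, char **argv) {
    srand((unsigned)time(NULL));
    canary = ((unsigned long)rand() << 16) ^ (unsigned long)rand();
    vulnerable(argc > 1 ? argv[1] : "short and safe");
    puts("returned cleanly");
    return 0;
}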

ROP is based on the Return-to-Libc exploit technique but uses gadgets from different areas of memory to create an executable program.

ROP gadgets may look like:
0x1000b516 : pop eax ; pop ebp ; ret
0x10015875 : pop eax ; pop ebp ; ret 0x1c
0x1000ffe3 : pop eax ; pop ecx ; xchg dword ptr [esp], eax ; jmp eax
(apriorit, 2017)

While the widespread adoption of DEP, which ensures that all writable pages in memory are non-executable, has made classic code injection attacks difficult, ROP has become the approach for most modern attacks. Rather than injecting malicious code, the attacker chains together snippets of code that already exist in the program and its libraries; these snippets, reached through a chain of return addresses the attacker places on the stack, are called gadgets. (TehAurum, 2015)
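
To illustrate the chaining idea without writing a real exploit, here is a toy C model of my own: each stand-in gadget is a tiny function, and chain[] plays the role of the attacker-controlled stack full of return addresses. In a real attack the addresses point at instruction snippets ending in ret, and the CPU’s ret instruction, not a loop, drives the walk.

/* rop_model.c - toy model of ROP chaining; chain[] stands in for the
   attacker-controlled stack, each function for a gadget ending in ret. */
#include <stdio.h>

static unsigned long reg_eax; /* stand-in for a register the gadgets manipulate */

static void gadget_pop_eax(void) { reg_eax = 0x4c01; printf("pop eax ; ret\n"); }
static void gadget_inc_eax(void) { reg_eax += 1;     printf("inc eax ; ret\n"); }
static void gadget_int21(void)   { printf("int 0x21 with eax=%#lx\n", reg_eax); }

int main(void) {
    /* the "overflowed stack": just an ordered list of gadget addresses */
    void (*chain[])(void) = { gadget_pop_eax, gadget_inc_eax, gadget_int21 };
    for (unsigned i = 0; i < sizeof chain / sizeof chain[0]; i++)
        chain[i](); /* a real chain is driven by ret popping the next address */
    return 0;
}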

I was really interested in getting some hands-on experience here to see how this worked in the real world. A bit of googling and I happened across this website: https://samsclass.info/127/proj/lbuf1.htm – I fired up a Linux machine with Vagrant on my desktop and started playing.

Here is my ASLR example
Note:  I ran in debug mode to show all the commands.  Lines prefixed by the + symbol are input commands and lines with no prefix are output.
vagrant@vagrant-ubuntu-trusty-64:~$ sh -x aslr.sh
+ echo Let’s make sure ASLR is enabled
Let’s make sure ASLR is enabled
+ sudo tee /proc/sys/kernel/randomize_va_space
+ sudo echo 1
1
+ echo Let’s look at the C code that will print the esp (pointer) memory address
Let’s look at the C code that will print the esp (pointer) memory address
+ cat esp.c
#include <stdio.h>
void main() {
register int i asm("esp");
printf("$esp = %#010x\n", i);
}
+ echo Let’s compile the source code into an executable program
Let’s compile the source code into an executable program
+ gcc -o esp esp.c
+ echo Let’s execute the binary executable esp three times
Let’s execute the binary executable esp three times
+ ./esp
$esp = 0xd47931b0
+ ./esp
$esp = 0x5526d700
+ ./esp
$esp = 0xf7542b00
+ echo You can see that the memory address changes each time (ASLR at work here)
You can see that the memory address changes each time (ASLR at work here)
+ echo Let’s disable ASLR
Let’s disable ASLR
+ sudo tee /proc/sys/kernel/randomize_va_space
+ sudo echo 0
0
+ echo Let’s execute the binary executable esp three more times
Let’s execute the binary executable esp three more times
+ ./esp
$esp = 0xffffe620
+ ./esp
$esp = 0xffffe620
+ ./esp
$esp = 0xffffe620
+ echo You can see that now the memory address remains the same each time (ASLR disabled)
You can see that now the memory address remains the same each time (ASLR disabled)
vagrant@vagrant-ubuntu-trusty-64:~$

I pushed on to more complex exercises; these are both excellent ones:

A couple of pointers to get started:

  1. Get Virtualbox to build your sandbox. (https://www.virtualbox.org/wiki/Downloads)
  2. Download a Windows 7 Vbox image (https://developer.microsoft.com/en-us/microsoft-edge/tools/vms/) to run the vulnserver executable you will get above on (note: get vulnserver.zip from the alternate link)
  3. Download Kali Linux for Vbox image (https://www.offensive-security.com/kali-linux-vm-vmware-virtualbox-hyperv-image-download/)

I left a bunch of stuff out, like how to configure networking, getting going with Immunity Debugger, etc., but it’s about the journey, not the destination. Right?

References

Carlini, N., & Wagner, D. (2014, August). ROP is Still Dangerous: Breaking Modern Defenses. In USENIX Security Symposium (pp. 385-399).

Ford, R. (2018, May 23). Vulnerabilities: How Things Go Wrong, Part 2. Retrieved May 23, 2018, from http://learningmodules.bisk.com/play.aspx?xml=L0Zsb3JpZGFUZWNoTUJBL01HVDUxNTYvQ1lCNTI4ME04VjEvRGF0YS9tb2R1bGUueG1s

Shacham, H. (2007, October). The geometry of innocent flesh on the bone: Return-into-libc without function calls (on the x86). In Proceedings of the 14th ACM conference on Computer and communications security (pp. 552-561). ACM.

TehAurum. (2015, December 30). Exploit Development: Stack Buffer Overflow – Bypass NX/DEP. Retrieved May 25, 2018, from https://tehaurum.wordpress.com/2015/06/24/exploit-development-stack-buffer-overflow-bypass-nxdep/

apriorit. (2017, June 02). ROP Chain. How to Defend from ROP Attacks (Basic Example). Retrieved May 25, 2018, from https://www.apriorit.com/dev-blog/434-rop-exploit-protection

 

Discussion Response 1

Good post. Given that you were interested in tips, I thought I would respond with a toolkit for playing with buffer overflows and code injection.
The toolkit (assuming you are a Windows user):

  • Code::Blocks:  http://www.codeblocks.org/downloads/binaries
    • This is a good free C IDE and compiler.
    • Note:  Grab either codeblocks-17.12mingw-setup.exe or codeblocks-17.12mingw-nosetup.zip
    • Note:  To use the debugger you will need to set it up under Settings -> Debugger -> Default and enter the path to gdb32 (on my system this is path_to\codeblocks-17.12mingw-nosetup\MinGW\gdb32\bin\gdb32.exe, but it will vary; just find gdb32.exe and enter the full path here).  You will also need to make sure you create a project and add .c files to the project, otherwise you won’t be able to debug.
  • OllyDbg:  http://www.ollydbg.de/ or IDA: https://www.hex-rays.com/products/ida/support/download.shtml
    • Both are good debuggers and disassemblers that will let you view the stack.

Some code to get started with:
/* vuln.c */
#include <stdlib.h>
#include <stdio.h>
#include <string.h>

int func (char *str)
{
    char buffer[5];
    strcpy(buffer, str); /* unbounded copy: input longer than the buffer overflows the stack */
    return 1;
}

int main(int argc, char **argv)
{
    char str[517];
    FILE *inputfile;
    inputfile = fopen("inputfile.txt", "r");
    fread(str, sizeof(char), 517, inputfile);
    func (str);
    printf("Returned Properly\n");
    return 1;
}

– Create a text file called inputfile.txt and place at least 517 characters in it.
– Compile and execute vuln.c
– Set a breakpoint at main() and debug to see what happens.

Play around with the size of the read or write buffer:
By changing the value of 5 in “char buffer[5]” in func() to 517
OR
By changing the value of 517 in “char str[517]” and in “fread(str, sizeof(char), 517, inputfile)” in main() to 5

If you debug while you play you will start to see things happen.
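
To see the countermeasure side, here is a minimal hardened sketch of the same program (my own variant, same buffer sizes as above): bounding the copy to the destination buffer and checking the I/O return values means the oversized input can no longer clobber the return address.

/* vuln_fixed.c - hardened variant of vuln.c: the copy is bounded by the
   destination buffer, so an oversized input can no longer smash the stack */
#include <stdio.h>
#include <string.h>

int func(const char *str)
{
    char buffer[5];
    strncpy(buffer, str, sizeof buffer - 1); /* copy at most 4 characters */
    buffer[sizeof buffer - 1] = '\0';        /* always NUL-terminate */
    return 1;
}

int main(void)
{
    char str[517] = {0};
    FILE *inputfile = fopen("inputfile.txt", "r");
    if (!inputfile) { perror("inputfile.txt"); return 1; }
    size_t n = fread(str, 1, sizeof str - 1, inputfile);
    str[n] = '\0';
    fclose(inputfile);
    func(str);
    printf("Returned Properly\n");
    return 0;
}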

Happy hacking!

 

Discussion Response 2

I’ve started doing some additional research and sandboxing because I am wondering about Return-Oriented Programming (ROP) as a method to circumvent Address Space Layout Randomization (ASLR). I need some hands-on time to really understand how gadgets can be chained given that the address space is randomized.

Doing some additional reading and experimenting to better understand the topic:

Anyway, I feel like I have a good handle on buffer overflows and ROP when ASLR is disabled. I have played with this on Linux by disabling ASLR (echo 0 | sudo tee /proc/sys/kernel/randomize_va_space), and when debugging I can see that instructions always reside at the same stack address. OK, back to the sandbox.

 

Discussion Response 3

Sharing this link
Here is a good, free sample lecture from Coursera and the University of Maryland that reviews much of what we spoke about this week. I found it helpful to reinforce the concepts, so I am sharing it with you.

https://www.coursera.org/learn/software-security/lecture/Lz5GW/low-level-security-introduction

 

Essay Assignment

Describe in detail code injection attacks and the countermeasures that exist to stop them. What future solutions are there?

[google-drive-embed url=”https://docs.google.com/document/d/1I10tpJfKkuH0GKEfTNfacW4v8ejG0euJ2GGLWoBcwPM/preview?usp=drivesdk” title=”Bocchinfuso – FIT – MGT5156 – Week 4 – Assignment 1″ icon=”https://drive-thirdparty.googleusercontent.com/16/type/application/vnd.google-apps.document” width=”100%” height=”400″ style=”embed”]

 

Midterm Exam

[google-drive-embed url=”https://docs.google.com/document/d/16kFzf48HQQvYNNvAqfjZWQ2mRo3Vi6_V_h2zjH0KaOs/preview?usp=drivesdk” title=”Bocchinfuso – FIT – MGT5156 – Midterm Exam” icon=”https://drive-thirdparty.googleusercontent.com/16/type/application/vnd.google-apps.document” width=”100%” height=”400″ style=”embed”]

 

Grade: 98%

FIT – MGT5156 – Week 3

Discussion Post

Discuss open source vs. closed source and security.

Another ridiculous week leads to another late discussion post; feeling like a real slacker. Luckily things settle down next week, so I should be back on track. Apologies to my peers for my late post yet again; all I could do this week to avoid a mental breakdown was accept a late discussion post.

Before we get started discussing the facts (or opinions of others) associated with open source vs. closed source, I wanted to share some personal thoughts on this topic. I remember installing Slackware Linux (Slackware, 2018) back in 1993, from 20+ floppies; the access to the source code and the ability to tweak or modify the kernel had me convinced that open source would eventually eclipse closed source. After running Slackware for a few years, like many early Linux adopters I tried other early distributions like Yggdrasil (Yggdrasil, 2018) and Debian (Debian, 2018). In or around 1998 I read Eric Raymond’s essay, “The Cathedral and the Bazaar” (Raymond, 1999); it was around this time that commercial distributions like Red Hat (Red Hat, 2018) and Caldera (Caldera OpenLinux, 2018) were beginning to take hold in the enterprise. During this period, I worked in big pharma, and I had traded shell scripting, sed, and awk for a cross-platform interpreted open source language called Perl, developed by a guy named Larry Wall. I can remember how fast we were moving now that we were building web applications with open source tech like Apache, CGI, and Perl. Security was for people who didn’t want to go fast; just hit CPAN, grab the library and go. (Perl, 2018) I highly recommend reading “The Cathedral and the Bazaar”; if not, watch the documentary called “Revolution OS”. (Revolution OS, 2012) IMO Raymond’s essay was on the money, but a little early to market. Raymond outlined the open source model perfectly, but we were in an age of innovation, rapid change, and resistance; today the open source, agile, and DevOps movements have allowed Raymond’s vision of the bazaar to be fully realized, and the benefits to agility and velocity are unparalleled. As we all know from Clayton M. Christensen’s book “The Innovator’s Dilemma” (Christensen, 2011), innovators struggle to retain market-leading positions; the open source world has many examples of this, and first movers like Slackware and VA Linux (Tozzi, 2016) are today either niche players or gone from the market. I provide this detailed background because IMO the paradigm shift brought about by the movement from the cathedral (closed source, rigid release cycles, etc.) to the bazaar model (open source, continuous integration, etc.) has some real and some perceived implications for security.

I’d like to point out an observation regarding security and social behavior. People tend to watch their possessions in a cathedral with less vigilance than they would in a bazaar. This behavior is human nature; when we feel safe we relax, and when we feel unsafe we keep a watchful eye. I believe it is this human behavior that is very impactful.

No matter how much research you do, the answer is almost always that open source vs. closed source in the context of security is a matter of preference rather than one model being more secure than the other. (Security Showdown: The Open Source vs. Closed Source Debate, 2017) Vulnerabilities exist, and there will always be those who seek to exploit them. My personal opinion is that OSS (open source software) is perceived by its users as having a broader attack surface than closed source software; thus the community is more vigilant. Those who willingly adopt OSS know they are moving into a neighborhood with a high crime rate, so they are more likely to lock the door. The alternative opinion is that closed source is less vulnerable because the source code is not “readily” available (Lettice, 2004), but the “security through obscurity” paradigm has been proven to be a poor one. There are comparable examples of both open source and closed source exploits, such as Heartbleed, the OpenSSL vulnerability, and WannaCry, the ransomware attack that targeted Microsoft Windows users. (Security Showdown: The Open Source vs. Closed Source Debate, 2017) With this said, there are not many closed source operating systems or applications which do not contain some piece of open source code. OpenSSL exists everywhere; Microsoft has offered a package called SFU (Services for UNIX) as an operating system option since 1999, and its successor, the Windows Subsystem for Linux, today allows Windows 10 users to run a full Linux distro in user mode on top of the Windows kernel, and as we all know, Linux is open source. While closed source software is not going away, open source code is integrated into closed source products by almost every closed source provider today, making the perceived closed source controls just that: perception, not reality.

To close out my thoughts here, open source vs. closed source is merely a matter of preference and perception. I believe that the danger lies in the perception that closed source is somehow less vulnerable than open source; this perception relaxes the security posture, and the best way to prevent a breach is to be vigilant. Linux, the open source operating system which powers greater than sixty-seven percent of the internet, along with open source applications like Apache, Nginx, etc., may be the most prominent target, but it may also be the most well-defended target. (Open Source vs Closed Source – Which Is More Secure?, 2017) The inability to obscure open source should remove the false sense of “security through obscurity” and foster a sense of vigilance. Does this always happen? No, but the premise is sound.

References

Caldera OpenLinux. (2018, May 15). Retrieved from https://en.wikipedia.org/wiki/Caldera_OpenLinux

Christensen, C. M. (2011). The innovator’s dilemma: The revolutionary book that will change the way you do business. Harper Business.

Debian. (2018, May 18). Retrieved from https://en.wikipedia.org/wiki/Debian

Lettice, J. (2004, Feb 13). MS Windows source code escapes onto Internet. Retrieved from https://www.theregister.co.uk/2004/02/13/ms_windows_source_code_escapes/

Open Source vs Closed Source – Which Is More Secure? (2017, June 13). Retrieved from http://www.franklinfitch.com/blog/2017/06/13/open-source-vs-closed-source-secure/

Perl. (2018). Retrieved from https://www.perl.org/

Raymond, E. S. (1999). The cathedral and the bazaar: Musings on Linux and Open Source by an accidental revolutionary. O’Reilly.

Red Hat. (2018, May 17). Retrieved from https://en.wikipedia.org/wiki/Red_Hat

Revolution OS. (2012, January 25). Retrieved from https://youtu.be/jw8K460vx1c

Security Showdown: The Open Source vs. Closed Source Debate. (2017, April 04). Retrieved from https://www.veracode.com/blog/security-showdown-open-source-vs-closed-source-debate

Tozzi, C. (2016, July 29). Open Source History: The Spectacular Rise and Fall of VA Linux. Retrieved from http://www.channelfutures.com/open-source/open-source-history-spectacular-rise-and-fall-va-linux

Yggdrasil. (2018, May 12). Retrieved from https://en.wikipedia.org/wiki/Yggdrasil

 

Discussion Response 1

Nicely done, a good read. Open source can be a confusing topic, even to those who live it daily. The gut instinct is to assume that open source is free, like “freeware”, but this would be incorrect. There is a quote from Richard Stallman, the founder of the GNU (GNU’s Not UNIX!) movement, that perfectly describes the freedoms of open source; the quote reads “Think ‘free speech’, not ‘free beer.'” The challenge with the word “free” is it does not distinguish between “free of charge” and “liberty.” The other thing that further complicates open source is the number of license agreements which can be applied to open source works; they differ slightly, and the author has to know what he or she is trying to accomplish when applying these licenses to their work. Popular open source licenses include the GPL (General Public License), for which there are multiple versions and controversy over each (watch Revolution OS; Bruce Perens discusses the GPL at length, and Eric Raymond explains the cathedral and the bazaar at length), the MIT license, the Apache license, etc.

I agree that perspective plays a significant role in regards to security and open source vs. closed source. With regards to the attack surface, I think we have to be careful to distinguish vulnerabilities from exploits (i.e., a piece of malware targeted at a specific vulnerability is written and released into the wild). I like your thought on hackers wanting to disassemble compiled source code to hack it; I am not sure if they are looking for that kind of challenge, but it’s possible. I think the reality is that today hackers target the user as much as they do the system. When you think about Linux, you think of a user who understands the system; it is unlikely they bought their computer loaded with Debian at Best Buy, and this user is harder to social engineer and deliver a malicious payload to. When you think about the average Windows user, sure, some people understand the system, but then there are my parents, who click on every link they get emailed. The makers of systems like Windows understand their demographics; they attempt to balance security and user experience, but features like “autorun” naturally make these systems more vulnerable. The user demographics and attack surface (adoption rate, number of versions that can be impacted, etc.) matter.

Like I mentioned above, on a rainy day watch “Revolution OS” and you’ll have a great intro to open source. If you like it, I recommend “The Code: Story of Linux”.

References

AutoRun. (2018, May 10). Retrieved May 20, 2018, from https://en.wikipedia.org/wiki/AutoRun
Bruce Perens. (2018, May 19). Retrieved May 20, 2018, from https://en.wikipedia.org/wiki/Bruce_Perens
Hash, V. (2012, January 25). Revolution OS. Retrieved May 20, 2018, from https://youtu.be/jw8K460vx1c
N, A. (2014, July 23). The Code: Story of Linux documentary (MULTiSUB). Retrieved May 20, 2018, from https://youtu.be/XMm0HsmOTFI
Open Source Licenses & Standards. (n.d.). Retrieved May 20, 2018, from https://opensource.org/licenses
RobinGood. (2006, October 19). Richard Stallman – What is free software? Retrieved May 20, 2018, from https://www.youtube.com/watch?v=uJi2rkHiNqg

 

Discussion Response 2

Excellent post. I would like to point out a few thoughts that I think are important aspects of open source. First, remember open source is about freedom and liberties and has nothing to do with dollars and cents. If you were to look at the market today and all the attributed open source software, I think you would be surprised by the amount of revenue that is being generated by open source software and its derivatives. It is also important to realize that while the community of open source subject matter experts dwarfs that of closed source, open source has a robust support paradigm. Let’s look at an example; I’ll use Amazon Web Services, a cloud company built almost entirely on open source. Let’s look at a prominent AWS service like EC2 (Elastic Compute Cloud), which is built using Linux and the Xen hypervisor, both open source projects. EC2 is just one of the dozens of AWS services built using open source that are packaged and delivered to customers with support in a business model (the cloud) that will drive north of $20 billion in revenue in 2018. How about Nvidia and the machine learning craze? Nvidia has been a GPU (Graphics Processing Unit) leader for years; their primary customers were gamers, but the use of GPUs for AI, machine learning, and cryptocurrency mining has propelled Nvidia to new heights. Nvidia capitalized on the machine learning craze and their hardware platform by packaging their hardware with open source software; they called this the DGX-1, a turnkey platform for machine learning. What is the secret to the DGX-1? It’s packaged open source. The challenge with open source, especially in complex applications like machine learning, is compatibility: what version of Nvidia CUDA code do I need to pair with my required version of TensorFlow, MXNet, etc.? Those who don’t need commercial support, like me, build systems that closely parallel what Nvidia did in the DGX-1, and we turn to the community for help (e.g., Gitter, StackOverflow); an example of a packaged machine learning system is Deepo, almost identical to how the DGX-1 is constructed. For the average enterprise where the tech is context, they may prefer to turn to Nvidia for support. Does AWS buy open source support? The answer is no; they employ people capable of debugging the source code and self-supporting. Alternatively, the Kalamazoo Credit Union may have a machine learning project, but they don’t want to be debugging framework source code; they are likely to purchase an Nvidia DGX-1.

I don’t think I can agree with the open source training and usability hypothesis. Conduct a Google search for “learn R”, then conduct one for “learn Matlab” and see if you see a difference in the number of resources for R (open source) vs. Matlab (closed source).

On the topic of security, this is pretty close to a religious argument. What I believe is that the weakest link in the system is the user, and I also think that there is a link between the user and exploitation. All systems have vulnerabilities; the Linux kernel has nearly 2x more reported vulnerabilities than Windows 10, but if you leave the door open and no one robs you, there is an unrealized impact. Windows is a target because its users can be socially engineered into delivering a malicious payload. The link between the user, system usability, and the ability to exploit a vulnerability is a subjective measure (because I have not done the research), but I believe empirical data would support it.

References

Amazon EC2. (n.d.). Retrieved May 20, 2018, from https://aws.amazon.com/ec2/

CUDA Zone. (2017, September 30). Retrieved May 20, 2018, from https://developer.nvidia.com/cuda-zone

Dignan, L. (2018, May 17). Nvidia continues to ride AI, gaming, machine learning, crypto waves. Retrieved May 20, 2018, from https://www.zdnet.com/article/nvidia-continues-to-ride-ai-gaming-machine-learning-crypto-waves/

Gitter. (n.d.). Retrieved May 20, 2018, from https://gitter.im/

MXNet: A Scalable Deep Learning Framework. (n.d.). Retrieved May 20, 2018, from https://mxnet.incubator.apache.org/

NVIDIA DGX-1: Essential Instrument of AI Research. (n.d.). Retrieved May 20, 2018, from https://www.nvidia.com/en-us/data-center/dgx-1/

TensorFlow. (n.d.). Retrieved May 20, 2018, from https://www.tensorflow.org/

The Linux Kernel documentation. (n.d.). Retrieved May 20, 2018, from https://www.kernel.org/doc/html/latest/

Top 50 Products By Total Number Of “Distinct” Vulnerabilities in 2017. (n.d.). Retrieved May 20, 2018, from https://www.cvedetails.com/top-50-products.php?year=2017

Ufoym. (n.d.). Ufoym/deepo. Retrieved May 20, 2018, from https://github.com/ufoym/deepo

Where Developers Learn, Share, & Build Careers. (n.d.). Retrieved May 20, 2018, from https://stackoverflow.com/

Xen Project. (n.d.). Retrieved May 20, 2018, from https://www.xenproject.org/

 

Discussion Response 3

I think you hit on an excellent point here with the IKEA furniture analogy.
IKEA produces closed source furniture that requires assembly, and they provide subpar documentation. It’s been a while since I bought something from IKEA (kids not off to college yet), but given the price point, I can only imagine what the dial-in support experience is.
Let’s contrast this with Norm Abram and The New Yankee Workshop, what I would consider open source furniture. Norm delivers high-quality plans to a consumer who possesses a certain skill level and is willing and capable of reading the plans, acquiring the raw material, etc. If you are this individual, you get a higher quality deliverable, but it requires a generally higher level of skill as a starting point. If you don’t possess this starting level of expertise, you might lose a finger. Many people will buy from IKEA because they are afraid of losing a finger.
Microsoft is to IKEA what Linus Torvalds is to Norm Abram: closed source vs. open source in the context of self-assembled furniture; I love it!
As a developer I read release notes; I make sure a patch won’t render a library I am using inoperable. Well, actually, not so much anymore, because I pretty much microservice everything and use containers to avoid this dependency pitfall, but the anecdote serves a purpose. Your wife is a smart Windows user; she’s the anomaly though, kudos to her for developing her own test and QA department :). The reality is most Windows users upgrade with no idea what is happening, then they scramble when something stops working.

Enough has been said on the religious argument of the security of open source vs. closed source so I will leave this alone at this point. 🙂

Thanks for the IKEA idea, I will definitely be using it in the future! 🙂

 

Essay Assignment

Write an essay contrasting the security models of Linux, iOS, and Windows. Which is more secure and why?

[google-drive-embed url=”https://docs.google.com/document/d/1HscxhFiehMi4O6j8gJ5GT1iHqdPSSmzz-djk1HbmKiU/preview?usp=drivesdk” title=”Bocchinfuso – FIT – MGT5156 – Week 3 – Assignment 1″ icon=”https://drive-thirdparty.googleusercontent.com/16/type/application/vnd.google-apps.document” width=”100%” height=”400″ style=”embed”]

 

OS Security Module Assignment

[google-drive-embed url=”https://docs.google.com/document/d/1e2eyOxUXjdiU5pzL4-uKFTrljW4iyzcXIUwY9dURdrk/preview?usp=drivesdk” title=”Bocchinfuso – FIT – MGT5156 – Week 3 – Assignment 2″ icon=”https://drive-thirdparty.googleusercontent.com/16/type/application/vnd.google-apps.document” width=”100%” height=”400″ style=”embed”]