Richard J. Bocchinfuso

"Be yourself; everyone else is already taken." – Oscar Wilde

FIT – MGT5157 – Week 7

FIT – MGT5157 – Week 7 – Discussion Post

Discussion: Identify requirements that should be considered when determining the locations and features of firewalls. What are some important steps to take to keep firewalls effective?

In the context of “determining the locations and features of firewalls,” I believe it is critical to understand how infrastructure and traffic patterns are evolving. Firewalls have always been essential in filtering and protecting north-south network traffic. The emergence of technologies like virtualization and software-defined networking (SDN) has dramatically increased east-west network traffic. Just as long-range ballistic missiles have eroded aspects of the layer one protection provided by the oceans, these technologies have negated aspects of the layer one protection provided by physical network segmentation. Technologies like virtualization and SDN have accelerated the development of next-generation firewalls (NGFW) that deliver a “deep-packet inspection firewall that moves beyond port/protocol inspection and blocking to add application-level inspection, intrusion prevention, and bringing intelligence from outside the firewall.” (Aldorisio, 2017)

Most people are reasonably familiar with perimeter security best practices.
The model most people know is the bastion host topology, the type of firewall topology deployed on most home networks, where the LAN (intranet) and WAN (Internet) are separated by a cable modem that acts as both router and firewall.

A more complex network may utilize a screened subnet topology, which implements a DMZ (Demilitarized Zone). In the screened subnet topology, systems that host public services are placed on the DMZ subnet rather than on the LAN subnet. This separates public services from the LAN, or trusted subnet, by locating publicly accessible services in the DMZ, so that if a publicly available service becomes compromised, there is an added layer of security aimed at stopping an attacker from traversing from the DMZ subnet to the LAN subnet.

A topology which takes the screened subnet a step further is the dual firewall topology, where the DMZ (Demilitarized Zone) is placed between two firewalls. The dual firewall topology is a common topology implemented by network security professionals, often using firewalls from different providers as an added layer of protection should an attacker identify and exploit a vulnerability in one vendor's software.

Enterprise-grade firewalls also allow for more complex topologies which extend the topologies described above beyond internal (LAN), external (WAN), and DMZ networks. Enterprise-grade firewalls support more interfaces, faster processors (which allow more layered intelligent services), higher throughput, etc. Support for software features such as virtual interfaces, VLANs, VLAN tagging, etc. allows for greater network segmentation, enabling the ideas discussed above to be applied discretely based on requirements.

Some steps to maintain firewall effectiveness include (Mohan, 2013):

  • Clearly define a firewall change management plan
  • Test the impact of firewall policy changes
  • Clean up and optimize firewall rule base
  • Schedule regular firewall security audits
  • Monitor user access to firewalls and control who can modify firewall configuration
  • Update firewall software regularly
  • Centralize firewall management for multi-vendor firewalls

References

Aldorisio, J. (2017, November 27). What is a Next Generation Firewall? Learn about the differences between NGFW and traditional firewalls. Retrieved August 17, 2018, from https://digitalguardian.com/blog/what-next-generation-firewall-learn-about-differences-between-ngfw-and-traditional-firewalls

Chapple, M. (2018, August 17). Choosing the right firewall topology: Bastion host, screened subnet or dual firewalls. Retrieved August 17, 2018, from https://searchsecurity.techtarget.com/tip/Choosing-the-right-firewall-topology-Bastion-host-screened-subnet-or-dual-firewalls

Ergun, O. (2015, January 10). What is East-West and North-South Traffic | Datacenter Design. Retrieved August 17, 2018, from https://orhanergun.net/2015/01/east-west-north-south-traffic/

Hossain, M. (2014, May 21). Trends in Data Center Security: Part 1 – Traffic Trends. Retrieved August 17, 2018, from https://blogs.cisco.com/security/trends-in-data-center-security-part-1-traffic-trends

How Does Micro-Segmentation Help Security? Explanation. (n.d.). Retrieved August 17, 2018, from https://www.sdxcentral.com/sdn/network-virtualization/definitions/how-does-micro-segmentation-help-security-explanation/

Mohan, V. (2013). Best Practice for Effective Firewall Management. Retrieved August 17, 2018, from http://cdn.swcdn.net/creative/v9.3/pdf/Whitepapers/Best_Practices_for_Effective_Firewall_Management.pdf

Network and Traffic Segmentation. (n.d.). Retrieved August 17, 2018, from https://www.pluribusnetworks.com/solutions/network-traffic-segmentation/

 

FIT – MGT5157 – Week 7 – Discussion Response 1

One-off post because Defcon is happening in Las Vegas; if you want to see what you're trying to protect against, I suggest following the week's activities. 🙂  https://twitter.com/hashtag/defcon?src=hash

From solving fizzbuzz with TensorFlow to curing cancer and everything in between, machine learning is changing how we programmatically solve problems: no longer focusing on loops, conditionals, and functions to solve a finite problem, but rather using training data and machine learning to teach the computer how to solve problems even if the inputs change from what is expected.  Essentially, we are using training data to teach the computer to reason; we call this inference.  Solving the fizzbuzz problem with TensorFlow is a great example of how machine learning can be used to solve a simple problem.

If you are not familiar with fizzbuzz, it’s a common programmer interview question.

Write a program that prints the numbers from 1 to 100. But for multiples of three print “Fizz” instead of the number and for the multiples of five print “Buzz”. For numbers which are multiples of both three and five print “FizzBuzz”.

A solution written in Python might look something like this:
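
# A minimal sketch of the classic loop-and-conditional approach, hard-coded to 1 through 100.
for i in range(1, 101):
    if i % 15 == 0:
        print("FizzBuzz")
    elif i % 3 == 0:
        print("Fizz")
    elif i % 5 == 0:
        print("Buzz")
    else:
        print(i)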

Above you can see that the Python code solves the problem as presented, but I would have to alter the program to do the same thing for a dataset from 101 to 1000. The ridiculous example of using TensorFlow to solve fizzbuzz is the work of Joel Grus, who wrote a hilarious blog post on it. Even though it is a ridiculously complex solution to the problem, and it yields the wrong answer, it is a great simple exercise to demonstrate the value of a neural network.

Maybe Elon Musk’s warning that AI could become “an immortal dictator from which we would never escape” is exaggerated for effect and Twitter fame, but it seems clear that AI will be a strong field general with supreme control over the chosen battlefield.  It’s about more than autonomous machines, it’s about autonomous everything; it’s about not solving fizzbuzz with loops and conditional statements, but rather building a neural network that can solve any variation of fizzbuzz.  It’s not about using malware signatures and firewall rules which statically protect north-south and east-west traffic, or stateful packet inspection which requires a known signature, but rather building a neural network that can continuously train and continuously improve protections. The bad news: the hacker community is also leveraging machine learning, deep learning, and AI to find and exploit vulnerabilities.  It’s an arms race, and both sides have fully operational uranium enrichment plants (we’ll call them TensorFlow, MXNet, and PyTorch) and a seemingly endless supply of uranium (we’ll call it cloud GPUs). 🙂  Cisco calls this “The Network. Intuitive.” I only use Cisco as an example because they made a fancy commercial that dramatizes the use of machine learning, deep learning, and artificial intelligence to build what they call “The Network. Intuitive.”  Oh, and who doesn’t love Tyrion Lannister?

 

FIT – MGT5157 – Week 7 – Discussion Response 2

Scott, good post.  Question: Do you think it will be possible to compete in the enterprise NGFW market without a cloud-based model?  My contention is that the aggregation and profiling of data gathered from deep packet inspection across the entire industry will allow NGFW OEMs to better identify and address threats.  These datasets will also function as training data for machine learning, deep learning, and AI models.  My belief is that the cloud plays, and will continue to play, a huge role in the innovation and adoption of NGFW technologies.

FIT – MGT5157 – Week 6

FIT – MGT5157 – Week 6 – Discussion Post

Discussion: Describe the basis for effective collaboration of security defenses within and between organizations.

This is an interesting question. I think ten years ago the effective collaboration of security defenses within and between organizations would have been highly dependent on effective, open communication between those organizations. Today I think the effective collaboration of security defenses is being aided by two core technology shifts:

  1. Cloud
  2. Machine Learning, Deep Learning, AI

Let’s start with the cloud. Today’s security providers are increasingly becoming cloud-enabled; they are relying on the aggregation of massive data sets (big data) for heuristics on massive compute farms that far surpass what is possible in a heuristics engine on a laptop, desktop, or mobile device. Just about every security technology provider is leveraging the cloud and the vast resources it provides. When organizations buy into cloud-based security paradigms, it is the equivalent of sharing and communicating information, but this information is now being aggregated, anonymized, analyzed, and cross-referenced in real-time.  (Quora Contributor, 2018)

Machine learning, deep learning, and AI are not just buzzwords; they are technologies that harness data and continuously train models that can begin to see things which are not visible to the naked eye. These technologies are greatly altering how we think about security. Security providers like AlertLogic, Secureworks, and many others that focus on IPS/IDS and incident response models leverage data that is anonymized but aggregated and analyzed across their entire customer base, and this has tremendous value. Security providers like Tanium and Panda Security and others who focus on end-point security also use cloud technologies, big data, and machine learning to provide superior heuristics. For example, the embedded anti-malware in Windows 10 makes use of “cloud-based protection” to better protect users; users are opted in to collaborating, and opting out requires user intervention that is buried in the bowels of the operating system and anti-malware (Windows Defender) configuration settings.

Collaboration and engagement require a focus on Human-Computer Interaction (HCI) to drive system usability and adoption; this is especially true in the field of security. Users vary, and they have different expectations of the systems they interact with. A simple blacklist or whitelist approach no longer gets the job done; these approaches slow productivity and encourage working around the system. (Coursera, 2018)

Intelligent security systems which leverage AI may be able to adapt security protocols based on user usage profiles. For example, which users took the lollipop and which users didn’t, and should how security is enforced differ for these two user types? (DreamHost, 2018)

To close out my thoughts this week, I will end with an example of a security problem that is not a platform problem but rather a usage problem, as is often the case. For those of us who have used Amazon (AWS) S3, the AWS object storage service, we know that AWS offers extremely fine-grained ACLs for S3 buckets; the security paradigm is quite robust and defaults to no access. But this robustness and fine-grained, programmatic, composable infrastructure comes with complexity (Amazon, 2018); complexity leads to usability challenges, which lead to exposing data that was never intended to be exposed. This week that victim was GoDaddy, which exposed an S3 bucket containing configuration data for tens of thousands of systems, as well as sensitive pricing information, apropos given our collective conversations last week regarding GoDaddy and DNS registrars.  (Chickowski, 2018)
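As an aside, this kind of exposure is cheap to audit programmatically. Here is a rough sketch using boto3 that flags any bucket whose ACL grants access to the public AllUsers group; it assumes AWS credentials are configured locally and checks ACLs only, not bucket policies, so it is a starting point rather than a complete audit.

import boto3

# Flag buckets whose ACL grants a permission to the public "AllUsers" group.
# Assumes AWS credentials are already configured for the account being audited.
s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    acl = s3.get_bucket_acl(Bucket=bucket["Name"])
    for grant in acl["Grants"]:
        uri = grant.get("Grantee", {}).get("URI", "")
        if uri.endswith("/AllUsers"):
            print(f"public grant on {bucket['Name']}: {grant['Permission']}")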

With > 80% of all corporations experiencing a hack of some sort, exploitation is on the rise and there is no end in sight. (Lipka, 2015) As we continue toward a public cloud world, platforms are providing more choice, easier access, and the ability to be more agile, build faster, and come to market faster, but we have lost the simplistic nature of layer 1 security. We have to have security systems that live at a layer above layer 1 human interaction and communication. I believe that progress will depend on the ability of the security systems of today and tomorrow to facilitate zero-touch collaboration in an automated and secure way.

References

Amazon. (2018, August 10). Bucket Policy Examples. Retrieved August 10, 2018, from https://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html

Chickowski, E. (2018, August 9). AWS Employee Flub Exposes S3 Bucket Containing GoDaddy Server Configuration and Pricing Models. Retrieved August 10, 2018, from https://www.darkreading.com/attacks-breaches/aws-employee-flub-exposes-s3-bucket-containing-godaddy-server-configuration-and-pricing-models/d/d-id/1332525

Coursera. (2018, August 10). Usable Security. Retrieved August 10, 2018, from https://www.coursera.org/lecture/usable-security/course-intro-60olh

DreamHost. (2018, January 30). Take This Lollipop… I Dare You! Retrieved August 10, 2018, from https://www.dreamhost.com/blog/take-this-lollipop-i-dare-you/

ElsonCabral. (2011, October 26). Take This Lollipop. Retrieved August 10, 2018, from https://www.youtube.com/watch?v=pbQm-nIMo_A

Lipka, M. (2015, June 05). Percentage of companies that report systems hacked. Retrieved August 10, 2018, from https://www.cbsnews.com/news/percentage-of-companies-that-report-systems-hacked/

Quora Contributor. (2018, February 15). How Will Artificial Intelligence And Machine Learning Impact Cyber Security? Retrieved August 10, 2018, from https://www.forbes.com/sites/quora/2018/02/15/how-will-artificial-intelligence-and-machine-learning-impact-cyber-security/#34f878166147

 

FIT – MGT5157 – Week 6 Discussion Response 1

James, I would go as far as to say that unless mandated by a regulatory requirement, very few enterprises are advertising breaches, and even when mandated by regulatory bodies, they are pushing the boundaries of disclosure.  For example, Equifax took six weeks to disclose its hack, and it is not the only major enterprise in a regulated industry looking to delay disclosure.   The bigger the organization and the more sensitive the data, the tighter and more broad-sweeping the NDAs.  Ed Snowdens are not falling out of trees, and the breach statistics, when contrasted with the number of reported breaches, suggest there is more interest in obfuscation than there is in disclosure.  Sure, the OTR conversations can happen at an InfoSec meetup, but the bigger the enterprise, the more isolated and focused exposure is becoming, with access to systems, processes, conversations, etc. becoming so tightly governed that it’s getting harder and harder to assemble a full picture of a situation. Those who do have the complete picture don’t attend InfoSec meetups; they are busy having dinner at Le Bernardin. 🙂

I think it’s fair to assume we know only a small fraction of what’s happening and that the preponderance of the most diabolical stuff never makes it into the mainstream.  As technology becomes a profit center for every company, we will see more and more of this.  The days of “we are a manufacturing company and tech is a cost center” are over; big data, analytics, and machine learning are driving every industry, with the CMO spending more on technology than the CIO.

Not saying we shouldn’t keep trying, but I believe we will see significant innovations that will change the game, relying less on the good behavior of people and more on the machine to make and monitor decisions.  Andrew mentioned the Target breach; there is no reason that a PLC network for HVAC controls should have >= layer 2 access to a network for payment processing (IMO layer 1 is even questionable). What should have been disclosed is the name of the network architect who built that infrastructure and of everyone who looked at it thereafter and didn’t yell from the rooftop.

 

References

Isidore, C. (2017, September 8). Equifax’s delayed hack disclosure: Did it break the law? Retrieved August 10, 2018, from https://money.cnn.com/2017/09/08/technology/equifax-hack-disclosure/

McLellan, L. (n.d.). By 2017 the CMO will Spend More on IT Than the CIO. Retrieved August 10, 2018, from https://www.gartner.com/webinar/1871515

 

FIT – MGT5157 – Week 6 Discussion Response 2

Andrew, let’s assume that an organization or organizations have a well designed and implemented network infrastructure using platforms from providers like Cisco, Juniper, Palo Alto, etc.

[Figure: spine-leaf network design]

Organizations acting together (e.g., suppliers and buyers in a supply chain system) can secure their data exchange on encrypted channels, they can use multi-factor authentication, they can use geo-fencing, they can use certificate-based PKI smart cards, but what if the exploit resides in the router or firewall code?  What if there is an APT (Advanced Persistent Threat) against organization X which exploits some vulnerability in the router or firewall code?  When organization X identifies the breach, do they communicate that they have been breached?  If so, to whom?  While I agree that open communication is key to slowing the bad guys, reducing the blast radius, etc., I also believe there are few organizations willing to volunteer that they have been breached; this is especially true if the breach has to do with human error, which so many do.  The reports we see are typically driven by watchdog groups, like the recent GoDaddy breach; by a regulatory requirement to disclose, like the Target or Equifax breaches; or by a catastrophe, like the CodeSpaces breach, but in most cases the motivation to disclose is not very strong at all.  I believe the answer resides in anonymizing breach reports, focusing a little less on corporate accountability and more on getting the data needed to start programmatically plugging the gaps, making the system less punitive and mandating more tech to secure the network so the machine may save us from ourselves.  In essence, more carrot and less stick.  For example, what if in the case of Target there had been stateful packet inspection which saw both PLC data and payment processing data flowing on the same network and took automated action to segment the traffic, shut the traffic down, etc.?  Sure, these technologies will get hacked as well, but people are inherently poor binary decision makers, and I think we will see a different paradigm emerge.  I think we are seeing it already.

 

FIT – MGT5157 – Week 6 Discussion Response 3

Scott, enjoyed the post.  I think this is my first comment on one of your posts in this class.  I like Canvas 1000x better than the old LMS, but it feels like this class has more students or something because the discussion threads are long.  Anyway, I have a few friends who work for FireEye; they are 100% focused on APTs (Advanced Persistent Threats), and what they will say is that FireEye focuses on four things: Prevent, Detect, Contain, Resolve. While prevention and detection are important, with APTs the bad guys will typically find a way in, so they put a heavy focus on containment.  What containment is about is not letting the bad guys leave once they are in.  I always think about the bar scene from the movie “A Bronx Tale”.  The bad guys walk into the bar, but then they are contained. 🙂

Starting to see more and more focus on preventing data exfiltration (DLP).

FIT – MGT5157 – Week 5

FIT – MGT5157 – Week 5 – Discussion Post

Discussion: What is the market for DNS control? Who are the big players in managing domain names? Can domain names be exploited?

The market for DNS control is competitive. There is more to DNS control than just owning the DNS resolution; companies like GoDaddy are domain registrars, but they also provide services which leverage those domains, services like web hosting and email.  Organizations like GoDaddy started as registrars and grew into internet service providers; the same is true of organizations like AWS, which started as a service provider, saw an opportunity to be a domain registrar, and launched a service called Route 53 (cool name, because port 53 is the port that DNS runs on).

Domain names are controlled by ICANN (Internet Corporation for Assigned Names and Numbers). ICANN is a non-profit organization that acts as the governing body tracking domain names maintained by domain name registrars like GoDaddy and NameCheap. The ICANN master domain name database can be queried using “whois”.

Authoritative DNS root servers are controlled by only a few key players; these hostnames actually point to an elaborate network of DNS servers around the world.

Source:  Iana. (2018, August 3). Root Servers. Retrieved August 3, 2018, from https://www.iana.org/domains/root/servers

It’s not hard to understand why VeriSign is at the top of the list when you understand the relationship between ICANN and VeriSign.  As you look down the list, not surprisingly, there is a correlation between the authoritative DNS root servers and Class A address ownership. With the DoD owning 12 Class A addresses, you would imagine they would have an authoritative root DNS server.

Source:  Pingdom. (2008, February 13). Where did all the IP numbers go? The US Department of Defense has them. Retrieved August 3, 2018, from https://royal.pingdom.com/2008/02/13/where-did-all-the-ip-numbers-go-the-us-department-of-defense-has-them/

 

Querying the ICANN database for a specific domain name will return relevant information about the domain name as well as the registrar.
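The query is easy to script as well; a minimal sketch that shells out to the system whois client (assumes a whois binary is installed, and the exact field labels vary a bit by registrar):

import subprocess

# Run the system whois client and print the registrar-related lines.
result = subprocess.run(["whois", "bocchinfuso.net"], capture_output=True, text=True)
for line in result.stdout.splitlines():
    if line.strip().lower().startswith(("registrar:", "registrar iana id:")):
        print(line.strip())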

Running “whois bocchinfuso.net” reveals the registrar as NameCheap, the NameCheap IANA ID, etc.

Each domain registrar is assigned a registrar IANA (Internet Assigned Numbers Authority) ID by ICANN.

DomainState tracks statistics about domain registrars, so we can easily see who the major registrars are.

Source:  DomainState. (2018, August 3). Registrar Stats: Top Registrars, TLD Marketshare, Top Registrars by Country. Retrieved August 3, 2018, from https://www.domainstate.com/registrar-stats.html

GoDaddy is ~ 6x larger than the number two registrar. GoDaddy has grown to nearly 60 million registered domains, both organically and through acquisition.

Yes, DNS can be exploited. DNS allows attackers to more easily identify their attack vector. DNS servers are able to perform both forward lookups (mapping a DNS name to an IP address) and reverse lookups (mapping an IP address to a DNS name); this allows attackers to open the internet phone book, easily acquire a target, and commence an advanced persistent threat (APT).
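Both directions are trivial to script, which is part of what makes reconnaissance so cheap; a minimal sketch using the Python standard library (the domain is just the example from this post, and the reverse lookup only succeeds if a PTR record exists):

import socket

# Forward lookup: DNS name -> IPv4 address
ip = socket.gethostbyname("bocchinfuso.net")
print(ip)

# Reverse lookup: IP address -> DNS name (raises socket.herror if no PTR record exists)
hostname, aliases, addresses = socket.gethostbyaddr(ip)
print(hostname)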

Domain names are often linked with branding, so once an APT commences against a domain, the owner can’t simply move.  DNS can also play a role in protecting against threats. Services like Quad9 and OpenDNS provide DNS resolvers which are security aware; these DNS resolvers block access to malicious domains.

Because DNS names are how we refer to internet properties, typosquatting is a popular DNS threat. Typosquatting is a practice where someone registers a DNS name that is similar to a popular domain name, capturing everyone who typos the popular domain name.
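To illustrate how cheap this is for an attacker, here is a purely illustrative sketch that generates simple one-character typo variants of a domain (the domain is just an example; real typosquatting kits go much further):

# Illustrative only: generate simple one-character typo variants of a domain label.
def typo_variants(domain):
    label, _, tld = domain.partition(".")
    variants = set()
    for i in range(len(label)):
        variants.add(label[:i] + label[i + 1:] + "." + tld)         # character omitted
        variants.add(label[:i] + label[i] + label[i:] + "." + tld)  # character doubled
    return sorted(variants)

print(typo_variants("netflix.com"))   # e.g. netflx.com, nettflix.com, ...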

DNS servers are ideal DDoS attack targets because the inability to resolve DNS names has an impact across the entire network.

Registrar or domain hijacking is when an attacker gains access to your domain by exploiting the registrar. Once the attacker has access to the domain records, they can do anything from pointing the A record to a new location to transferring the domain to a new owner. There are safeguards that can be put in place to protect against unauthorized transfers, but someone gaining access to your registrar account is not a good situation.

DNS is a massive directory, and to decrease latency, DNS caches are placed strategically around the Internet. These caches can be compromised by an attacker, and resolved names may take an unsuspecting user to a malicious website. This is called DNS spoofing or cache poisoning.

These are just a few DNS attack vectors; there are plenty of others. The convenience of DNS is also what creates the risk. DNS makes it easy for us to find our favorite web properties like netflix.com, but it also makes it easy for an attacker to find netflix.com.

 

References

DomainState. (2018, August 3). Registrar Stats: Top Registrars, TLD Marketshare, Top Registrars by Country. Retrieved August 3, 2018, from https://www.domainstate.com/registrar-stats.html

Iana. (2018, August 3). Root Servers. Retrieved August 3, 2018, from https://www.iana.org/domains/root/servers

ICANN. (2018, August 3). ICANN64 Fellowship Application Round Now Open. Retrieved August 3, 2018, from https://www.icann.org/

Mohan, R. (2011, October 5). Five DNS Threats You Should Protect Against. Retrieved August 3, 2018, from https://www.securityweek.com/five-dns-threats-you-should-protect-against

Pingdom. (2008, February 13). Where did all the IP numbers go? The US Department of Defense has them. Retrieved August 3, 2018, from https://royal.pingdom.com/2008/02/13/where-did-all-the-ip-numbers-go-the-us-department-of-defense-has-them/

 

FIT – MGT5157 – Week 5 – Discussion Response 1

Carmeshia, I enjoyed your post. You bring up an interesting point regarding centralization, control, and exploitation. What do you think is more secure, a centralized or decentralized DNS registrar system?

With the increase in APTs (advanced persistent threats) I tend to favor decentralization, but everyone has a perspective, interested in hearing yours.

 

FIT – MGT5157 – Week 5 – Discussion Response 2

Nawar, good post, I enjoyed reading it. While DNS is not a security-centric protocol, few protocols are. The network’s reliance on DNS is both a good and a bad thing. Because DNS name resolution is such a critical network function, it is the target of attacks like DDoS attacks, because the blast radius of an attack on DNS is significant. With this said, the essential nature of DNS also has many focused on protecting it and mitigating risk. Services like Cloudflare, Akamai, Imperva Incapsula, Project Shield, and others have built robust anti-DDoS systems to identify and shed DDoS traffic.

 

FIT – MGT5157 – Week 5 – Discussion Response 3

Sharing some pretty interesting data when comparing the top DNS providers.

https://www.datanyze.com/market-share/dns/Datanyze%20Universe/

When you start to segment domains by Alexa rank, GoDaddy gets outranked by Cloudflare, Amazon Route 53, Akamai, and Google DNS pretty consistently.

[Chart: DNS market share by provider]

Some good detail on why in this article:  https://stratusly.com/best-dns-hosting-cloudflare-dns-vs-dyn-vs-route-53-vs-dns-made-easy-vs-google-cloud-dns/

The moral of the story here is that while GoDaddy appears to be the Goliath, it is only in terms of domain name registration volume; the FANG (Facebook, Apple, Netflix, Google) type companies own the internet traffic, and the volume DNS registration game is becoming a commodity.  GoDaddy has the first-mover advantage, but competitors like namecheap.net and name.com are coming after them.  With Netflix accounting for nearly 40% of all internet traffic, the FANG companies matter, and I don’t think the Cloudflares, Akamais, and Amazon Route 53s of the world want to chase the GoDaddy subscriber base.

FIT – MGT5157 – Week 3

FIT – MGT5157 – Week 3 Discussion Post

Discussion: Describe the differences between IPv6 and IPv4. What implications does it have on networks? On the user? What could be done to speed up the transition process?

First let’s talk about a major catalyst for the development and adoption of IPv6, the idea that the internet would exhaust the available IP address space. This prediction was made back in 2011, and it was stated the Internet would exhaust all available IP addresses by 4 AM on February 2, 2011. (Kessler, 2011) Here we are, 2,725 days later, and the “IPcalypse” or “ARPAgeddon” has yet to happen; in fact, @IPv4Countdown is still foreshadowing IPv4 doomsday scenarios via Twitter. So what is the deal? Well, it’s true the available IPv4 address space is limited, with a pool of slightly less than 4.3 billion addresses (2^32, more on this later). It is important to remember that many of these predictions predate Al Gore taking credit for creating the internet. Sorry, Bob Kahn and Vint Cerf, it was Al Gore who made this happen.

Back in the 1990s we didn’t have visibility into technologies like CIDR (Classless Interdomain Routing) and NAT (Network Address Translation). In addition, many of us today use techniques like reverse proxying and proxy ARPing. Simplistically, this allows something like NGINX to act as a proxy (middleman) where all services can be placed on a single port behind a single public IP address and traffic can be appropriately routed and proxied using that single public IP address.

For example, a snippet of an NGINX reverse proxy config might look something like this:

server {
    listen 80;
    server_name site.foo.com;
    location / {
        access_log on;
        client_max_body_size 500M;
        proxy_pass http://INTERNAL_HOSTNAME_OR_IP;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

server {
    listen 80;
    server_name site.bar.com;
    location / {
        access_log on;
        client_max_body_size 500M;
        proxy_pass http://INTERNAL_HOSTNAME_OR_IP;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

Let’s assume that there are two DNS A records, one for site.bar.com and one for site.foo.com, that both point to the same IP address, with a web server listening on port 80 behind it.  How does a request for site.bar.com know to go to web server A and a request for site.foo.com know to go to web server B? The answer is a reverse proxy, which can proxy the request based on the requested server name; this is what we see above.

I use this configuration for two sites which I host, bocchinfuso.net and gotitsolutions.org.

A dig (domain information groper) of both of these domains reveals that their A records point to the same IP address; the NGINX reverse proxy does the work to route to the proper server or service based on the requested server name and proxies the traffic back to the client. nslookup would work as well if you would like to try, but dig gives a little cleaner display for posting below.

$ dig bocchinfuso.net A +short
173.63.111.136
$ dig gotitsolutions.org A +short
173.63.111.136

NGINX is a popular web server which can also be used for reverse proxying, as I am using it above, as well as for load balancing.

IPv6 (Internet Protocol version 6) is the next generation of, and successor to, IPv4 (Internet Protocol version 4). IPv4 uses four octets of 8 bits each to comprise a 32-bit numerical address; IPv4 addresses are written as four numbers between 0 and 255.

Source:  ZeusDB. (2015, July 30). Understanding IP Addresses – IPv4 vs IPv6.

IPv6 addresses consist of eight 16-bit segments comprising a 128-bit address, giving IPv6 a total address space of 2^128 (~ 340.3 undecillion addresses), which is a pretty big address space. To put 2^128 into perspective, it is enough available IP address space for every person on the planet to personally have 2^95, or about 39.6 octillion, IP addresses. That’s a lot of IP address space.

Source:  ZeusDB. (2015, July 30). Understanding IP Addresses – IPv4 vs IPv6.
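The address-space arithmetic above is easy to sanity-check with a few lines of Python:

# Quick sanity check of the address-space math quoted above.
ipv4_space = 2 ** 32     # 4,294,967,296 addresses (slightly less than 4.3 billion)
ipv6_space = 2 ** 128    # roughly 3.4 x 10^38 addresses (~ 340.3 undecillion)
per_person = 2 ** 95     # the ~ 39.6 octillion per-person figure cited above
print(f"{ipv4_space:,}")
print(f"{ipv6_space:,}")
print(f"{per_person:,}")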

One of the challenges with IPv6 is that it is not easily interchangeable with IPv4; this has slowed adoption, and with the use of proxy, tunneling, etc. technologies, I believe the sense of urgency is not what it once was. IPv6 adoption has been slow, but with the rapid adoption of IoT and the number of devices being brought online, we could begin to see a significant increase in the IPv6 adoption rate. In 2002, Cisco forecasted that IPv6 would be fully adopted by 2007.

Source:  Pingdom. (2009, March 06). A crisis in the making: Only 4% of the Internet supports IPv6.

The Internet Society State of IPv6 Deployment 2017 paper states that ~ 9 million domains and 23% of networks are advertising IPv6 connectivity. When we look at the adoption of IPv6, I think this table does a nice job outlining where IPv4 and IPv6 sit relative to each other.

Source:  Internet Society. (2017, May 25). State of IPv6 Deployment 2017.

The move to IPv6 will be nearly invisible from a user perspective; our carriers (cable modems, cellular devices, etc.) abstract us from the underpinnings of how things work. Our request to google.com will magically resolve to an IPv6 address instead of an IPv4 address, and it won’t matter to the user.

For example here is a dig of google.com to return google[dot]com’s IPv4 and IPv6 address.

$ dig google.com A google.com AAAA +short
172.217.3.46
2607:f8b0:4004:80e::200e

Note: If you’re a Linux user you know how to use dig, macOS should have dig, and if you’re on Windows and don’t already know how to get access to dig, the easier path can be found here: https://www.danesparza.net/2011/05/using-the-dig-dns-tool-on-windows-7/
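If dig isn’t handy, a rough equivalent can be scripted with the Python standard library (google.com is just the example from above; whether you get AAAA answers depends on the resolver and the connectivity of the machine running it):

import socket

# Rough dig equivalent: ask the resolver for IPv4 (A-style) and IPv6 (AAAA-style) answers.
for family, label in ((socket.AF_INET, "IPv4"), (socket.AF_INET6, "IPv6")):
    try:
        answers = socket.getaddrinfo("google.com", None, family)
        print(label, sorted({a[4][0] for a in answers}))
    except socket.gaierror as exc:
        print(label, "lookup failed:", exc)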

The adoption rate of IPv6 could be increased by simplifying interoperability between IPv4 and IPv6. The exhaustion of the IPv4 address space and the exponential increase in connected devices are upon us, and this may be the catalyst the industry needs to simplify interoperability and speed adoption.

With the above said, interestingly IPv6 adoption is slowing.

McCarthy, K. (2018, May 22). IPv6 growth is slowing and no one knows why. Let’s see if El Reg can address what’s going on.

I think it’s a chicken-or-the-egg situation.  There have been IPv4 address space concerns for years; the heavy lift required to adopt IPv6 led to slow and low adoption rates, which pushed innovation in a different direction. With the use of a reverse proxy, maybe I don’t need any more public address space, etc. Only time will tell, but this is foundational infrastructure akin to the interstate highway system; change will be a long journey, and it’s possible we will start to build new infrastructure before we ever reach the destination.

 

References

Hogg, S. (2015, September 22). ARIN Finally Runs Out of IPv4 Addresses. Retrieved July 20, 2018, from https://www.networkworld.com/article/2985340/ipv6/arin-finally-runs-out-of-ipv4-addresses.html

Internet Society. (2017, May 25). State of IPv6 Deployment 2017. Retrieved July 20, 2018, from https://www.internetsociety.org/resources/doc/2017/state-of-ipv6-deployment-2017/

Kessler, S. (2011, January 22). The Internet Is Running Out of Space…Kind Of. Retrieved July 20, 2018, from https://mashable.com/2011/01/22/the-internet-is-running-out-of-space-kind-of/#49ZaFObrqPqW

McCarthy, K. (2018, May 22). IPv6 growth is slowing and no one knows why. Let’s see if El Reg can address what’s going on. Retrieved July 20, 2018, from https://www.theregister.co.uk/2018/05/21/ipv6_growth_is_slowing_and_no_one_knows_why/

NGINX. (2018, July 20). High Performance Load Balancer, Web Server, & Reverse Proxy. Retrieved July 20, 2018, from https://www.nginx.com/

Pingdom. (2009, March 06). A crisis in the making: Only 4% of the Internet supports IPv6. Retrieved July 20, 2018, from https://royal.pingdom.com/2009/03/06/a-crisis-in-the-making-only-4-of-the-internet-supports-ipv6/

Pingdom. (2017, August 22). Tongue twister: The number of possible IPv6 addresses read out loud. Retrieved July 20, 2018, from https://royal.pingdom.com/2009/05/26/the-number-of-possible-ipv6-addresses-read-out-loud/

Wigmore, I. (2009, January 14). IPv6 addresses – how many is that in numbers? Retrieved July 20, 2018, from https://itknowledgeexchange.techtarget.com/whatis/ipv6-addresses-how-many-is-that-in-numbers/

ZeusDB. (2015, July 30). Understanding IP Addresses – IPv4 vs IPv6. Retrieved July 20, 2018, from https://www.zeusdb.com/blog/understanding-ip-addresses-ipv4-vs-ipv6/

 

FIT – MGT5157 – Week 3 Discussion Response 1

James, love the profile pic.  Setting a hard date to kill IPv4 is a stick, no carrot.  The IPv6 shift discussion needs to be driven by the market makers; they should make it compelling enough for enterprises to begin moving faster.  The market makers can make a huge impact: Netflix accounts for > 1/3 of all internet traffic, people are rushing to AWS, Azure, and GCP at alarming rates, and the only procurers of tech that really matter are Amazon, Apple, Facebook, Alphabet, Microsoft, Tencent, and Alibaba.  If the market makers move, everyone else will follow; they will have no choice.  Why aren’t they moving faster?

This is further compounded by the fact that Cisco, Juniper, Arista, and the other mainstream networking equipment providers are not mentioned above.  It’s no secret that Amazon, Facebook, and others are running their own intellectual property to solve lots of legacy networking issues.  Facebook is building and deploying its own switches and load balancers, and AWS wrote its own networking stack because VPC needs could not be handled by traditional networking providers’ VLANs and overlay networks.  Now we are seeing the adoption of SDN increase, which could speed up IPv6 adoption or could slow it down.

 

FIT – MGT5157 – Week 3 Discussion Response 2

Yacine, NAT certainly has helped ease the IPv4 address space issue, as did other things like proxy ARPing and reverse proxying, all techniques to use less address space (also pretty important for network security).

arping can be a handy little tool to see whether you can contact a system and what MAC address it answers from.

> arp-ping.exe -s 0.0.0.0 192.168.30.15
Reply that B8:CA:3A:D1:7E:AB is 192.168.30.15 in 4.604ms
Reply that B8:CA:3A:D1:7E:AB is 192.168.30.15 in 15.745ms
Reply that B8:CA:3A:D1:7E:AB is 192.168.30.15 in 15.642ms
Reply that B8:CA:3A:D1:7E:AB is 192.168.30.15 in 15.623ms

While IPv6 may provide a ton of IP address space, I don’t think the use of NAT and proxies will change; these techniques are as much about security as they are about extending the address space.

 

FIT – MGT5157 – Week 2

FIT – MGT5157 – Week 2 – Discussion Post

What is the reason behind packet loss? What can protocols do to limit packet loss? Are there tools available for providers and consumers that identify the source of packet loss?

What is the reason behind packet loss?

“A primary cause of packet loss is the finite size of the buffer involved”  (Wu & Irwin, 2017, p. 15)

Link congestion can cause packet loss. This is where one of the devices in the packet’s path has no room in its buffer to queue the packet, so the packet has to be discarded. Increasing available bandwidth can be a resolution to link congestion; it allows buffers to empty quicker, reducing or eliminating queuing.  The use of QoS to prioritize traffic like voice and video can lower the probability of a dropped packet for traffic that does not tolerate packet loss and retransmission.

Bandwidth constraints: too much data on too small a pipe creates congestion and packet loss.

Congestion is like a four-lane road merging into a one-lane road. Packet loss can be intentional, where packets are dropped because a rule is in place to drop packets beyond a certain limit; hosting providers use this to control how customers use available bandwidth. Packet loss can also occur because of unintentional congestion, where the traffic simply exceeds the available bandwidth.
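A toy simulation makes the finite-buffer point concrete (all numbers are illustrative, not modeled on any real device):

import random

# Toy model: bursty arrivals into a fixed-size buffer that drains at a fixed rate.
# Anything that arrives while the buffer is full is discarded, i.e., packet loss.
BUFFER_SIZE = 10       # packets the device can queue (illustrative)
DRAIN_PER_TICK = 3     # packets the link forwards per tick (illustrative)

queue, dropped, forwarded = [], 0, 0
for tick in range(1000):
    for _ in range(random.randint(0, 6)):   # average arrival rate equals drain rate, but bursty
        if len(queue) < BUFFER_SIZE:
            queue.append(tick)
        else:
            dropped += 1                     # buffer full, packet discarded
    forwarded += min(DRAIN_PER_TICK, len(queue))
    del queue[:DRAIN_PER_TICK]

print(f"forwarded={forwarded} dropped={dropped}")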

Device performance can also cause packet loss. This occurs in a situation where you may increase the bandwidth of the route the packet will take, but the device (router, switch, firewall, etc…) is not able to handle the load. In this case, a new device is likely required to support the network load.

For example, a Cisco ASA 5505 is meant to handle 150 Mbps of throughput; if the device is pushed beyond that, it will likely begin to have issues. Maybe the CPU of the device can’t process the throughput, and the device experiences congestion and begins dropping packets.

Faulty hardware, software, or misconfiguration: issues can occur from a faulty component like an SFP (small form-factor pluggable) or a cable, from a bug in the device software, or from a configuration issue like a duplex mismatch, any of which can cause packet loss.

There are plenty of published examples of software issues which have caused packet loss.

Network attacks like a Denial of Service (DoS) attack can result in packets being dropped because the attack overwhelms a device with traffic.

What can protocols do to limit packet loss?

TCP (Transmission Control Protocol) is a connection-oriented protocol which is built to detect packet loss and to retransmit data. The protocol itself is built to handle packet loss.

UDP (User Datagram Protocol) is a connectionless protocol that will not detect packet loss and will not retransmit. We see UDP used for streaming content like stock ticker data, video feeds, etc. UDP is often used in conjunction with multicast, where data is transmitted one-to-many or many-to-many. You can probably visualize the use cases here and how packet loss can impact the user experience. With UDP, data is lost rather than the system experiencing slow or less-than-optimal response times.
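A minimal sketch of that fire-and-forget behavior (the loopback address and port are arbitrary; nothing needs to be listening):

import socket

# UDP is connectionless: sendto() hands the datagram off and returns.
# There is no handshake, no ACK, and no retransmission if the datagram is lost.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for i in range(5):
    sock.sendto(f"tick {i}".encode(), ("127.0.0.1", 9999))
sock.close()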

Layer 4 Transport Optimizations

  • RIP (Routing Information Protocol) and BGP (Border Gateway Protocol) make routing (pathing) decisions based on paths, policies, and rules.
  • TCP Proxy and TFO (Traffic Flow Optimization)
  • Compression
  • DRE (Data Redundancy Elimination):  A technique used to reduce traffic by removing redundant data transmission.  This can be extremely useful for chatty protocols like CIFS (SMB).

Layer 2 and 3 Optimizations

  • OSPF (Open Shortest Path First) and IS-IS (Intermediate System to Intermediate System) use link state routing (LSR) algorithms to determine the best route (path).
  • EIGRP (Enhanced Interior Gateway Routing Protocol) is an advanced distance-vector routing protocol used to automate network routing decisions.
  • Network Segmentation and QoS (Quality of Service): Network congestion is a common cause of packet loss, network segmentation and QoS can ensure that the right traffic is given priority on the network while less critical traffic is dropped.

Are there tools available for providers and consumers that identify the source of packet loss?

There are no hard and fast rules for detecting packet loss on a network but there are tools and an approach that can be followed.

Some tools I use for diagnosis and troubleshooting:

 

References

Bocchinfuso, R. (2008, January 15). Fs Cisco Event V6 Rjb. Retrieved July 13, 2018, from https://www.slideshare.net/rbocchinfuso/fs-cisco-event-v6-rjb

Hurley, M. (2015, April 28). 4 Causes of Packet Loss and How to Fix Them. Retrieved July 13, 2018, from https://www.annese.com/blog/what-causes-packet-loss

Packet Loss – What is it, How to Diagnose and Fix It in your Network. (2018, May 01). Retrieved July 13, 2018, from https://www.pcwdld.com/packet-loss

Wu, Chwan-Hwa (John). Irwin, J. David. (2013). Introduction to computer networks and cybersecurity. Hoboken: CRC Press.

 

FIT – MGT5157 – Week 2 – Discussion Response 1

For anyone looking to play with packet sniffing, regardless of the sniffer, it is always good to capture a quality workload, be able to modify your lab environment, and replay the workload to see what happens.  Windump (tcpdump for Windows) is a great tool to capture traffic to a pcap file, but I would also become familiar with tcpreplay.  You probably want to trade in that Windows box for Linux; my distro of choice for this sort of work is Parrot Security OS.  There is one Windows tool I really like, called NetworkMiner, check it out.  I would also get familiar with GNS3 and the NETem appliance.  There are so many great tools out there, but GNS3 is a critical tool for learning.  Capturing a quality workload to a pcap, modifying your lab network with GNS3, and using tcpreplay to replay the workload while observing behavior provides a great way to experiment and see the impact. Looking ahead, GNS3 provides a way to apply the routing and subnetting theory that it looks like we’ll be diving into in week three.

 

FIT – MGT5157 – Week 2 – Discussion Response 2

Andrew, good post.  The only comment I would make is to be careful with using ping as the method to diagnose packet loss; it’s a great place to start if the problem is really overt, but often the issues are more complex, and dropped ICMP packets can be expected behavior because ICMP is typically deprioritized by QoS.

I typically recommend the use of paping (https://code.google.com/archive/p/paping/) or hping3 (https://tools.kali.org/information-gathering/hping3) to send a TCP request instead of an ICMP request.
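If neither tool is available, the same idea can be roughed out in a few lines of Python by timing a TCP handshake instead of sending ICMP (the host and port are just examples; pick a port the target actually has open):

import socket
import time

host, port = "www.google.com", 443   # example target; use a port the target exposes
for _ in range(4):
    start = time.monotonic()
    try:
        # Time the TCP three-way handshake instead of an ICMP echo.
        with socket.create_connection((host, port), timeout=2):
            print(f"connected to {host}:{port} in {(time.monotonic() - start) * 1000:.1f} ms")
    except OSError as exc:
        print(f"connection to {host}:{port} failed: {exc}")
    time.sleep(1)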

If you are going to use ping I would also suggest increasing the ICMP payload size, assuming the target is not rejecting ICMP requests or dropping them because of a QoS policy.

Lastly, there are lots of hops between your computer and the destination, and using MTR is a great way to see where packets are being dropped, where the latency is, etc.

 

FIT – MGT5157 – Week 2 – Discussion Response 3

Jonathan, a couple of comments on your post.  While TCP packet loss and dropped packets have the same result, a discarded packet requiring retransmission, packet loss has an implied context suggesting that the discarded packet was unintentional, for the reasons you mention above.  Dropped packets can also be intentional; for example, ICMP (ping) traffic is often deprioritized by QoS, so these packets are intentionally dropped so they do not impact higher priority traffic.

 

UDP is a connectionless protocol, so there is no ACK from the receiver.  Packets are sent, and if they are lost there is no retransmit because there is no way for the protocol to know the packet was not delivered; with UDP, data can be lost or delivered out of order.  There are implementations of UDP (e.g., RUDP) where checks can be added to increase the reliability of the protocol. UDP is often used in conjunction with multicast; if you think about multicast and how TCP and UDP work, it becomes obvious why multicast works with a connectionless protocol like UDP and why TCP can only be used in unicast applications.

 

 

FIT – MGT5157 – Week 1

FIT – MGT5157 – Week 1 – Discussion Post

What is the internet2? What implications does it hold to the current internet infrastructure?

Super interesting question, because while I may have heard of Internet2 years ago, I can’t say I ever really knew what it was. I also think it’s interesting given Tim Berners-Lee’s recent comments on his regrets about what he was so pivotal in creating, the World Wide Web.

As I read about Internet2, I can’t help but think about how it parallels ARPANET and NSFNET. Rather than trying to create a network and pass the first packets like ARPANET, the Internet2 consortium has the goal of innovating new Internet technologies to meet the demands of the modern connected world.

Leonard Kleinrock does a fabulous job explaining the first router, a packet switch built by BBN (Bolt Beranek and Newman), and the first message sent across the Internet (then the ARPANET) between UCLA and SRI (Stanford Research Institute). (Kleinrock, 2009)

I also highly recommend a documentary called “Lo and Behold, Reveries of the Connected World” (It is on Netflix).

If you have spare time and want to dig deeper, Charles Severance has a great Coursera class called “Internet History, Technology, and Security” which I also recommend.

Internet2 is both a research and development initiative and a tangible domestic U.S. nationwide carrier-class hybrid optical and packet network, with the goal of supporting research facilities in their development of advanced Internet applications. (Wu & Irwin, 2013, p. 10)

Funny how similar the maps below look; the parallel between the Internet2 map and the NSFNET map is not a coincidence. The infrastructure required to build these networks is owned by few providers, and those organizations invest heavily in lobbyists to block new entrants. It’s a game that undoubtedly slows innovation. Just read about the challenges that Google Fiber had trying to lay fiber. (Brodkin, 2017)

The Internet2 backbone.

Source:  Wu, Chwan-Hwa (John); Irwin, J. David. Introduction to Computer Networks and Cybersecurity (Page 11). CRC Press. Kindle Edition.

Source:  Wikipedia. (2018, July 03). National Science Foundation Network. Retrieved July 6, 2018, from https://en.wikipedia.org/wiki/National_Science_Foundation_Network

 

Regarding the implications Internet2 holds for the current internet infrastructure: Internet2 seems to be focused on research and education, not all that different from the objectives of ARPANET, CSNET, and NSFNET.  Internet2 is aiming to solve the problems of the modern Internet with a focus on innovating to enable research and education, including innovations that aim to increase bandwidth, remove bottlenecks, and enable software-defined networking.

The one thing that concerned me is that in my research I did not see the role of commercial partners like Netflix and Google.  This concerns me because we live in a time where these two providers alone are responsible for > 50% of Internet traffic.  This means that massive backbone providers like Level 3 and Cogent are carrying a ton of Netflix and Google (more specifically, YouTube) traffic.  Unlike the days of ARPANET, commercial entities have a massive role in the evolution and innovation of the Internet.  While CERN is mentioned, I think we would be remiss not to realize that there is a migration of data, even in research and education, to the cloud, which means that Amazon becomes the carrier's customer, not the research or education institution.

Internet goliaths like Google, Facebook, Netflix, and Amazon are struggling to buy off-the-shelf infrastructure to support their massive needs.  All of these providers are building infrastructure and in many cases open sourcing the how-to documentation.  There is no doubt that we live in interesting technological times.

For example, here is what NASA JPL (Jet Propulsion Laboratory) did with AWS:

With all that said, as for the implications of Internet2 on the current Internet, I do not see many.  It would seem to me that Internet2 will need to focus on a niche to even remain relevant.

One final thought: did the Internet2 consortium have something to do with us moving off that prehistoric LMS we were using to Canvas? If so, keep up the great work.  The ability to create rich media posts, how revolutionary.  ¯\_(ツ)_/¯

References

Brodkin, J. (2017, November 24). AT&T and Comcast lawsuit has nullified a city’s broadband competition law. Retrieved July 6, 2018, from https://arstechnica.com/tech-policy/2017/11/att-and-comcast-win-lawsuit-they-filed-to-stall-google-fiber-in-nashville/

Brooker, K. (2018, July 02). “I Was Devastated”: The Man Who Created the World Wide Web Has Some Regrets. Retrieved July 6, 2018, from https://www.vanityfair.com/news/2018/07/the-man-who-created-the-world-wide-web-has-some-regrets

Kleinrock, L. (2009, January 13). The first Internet connection, with UCLA’s Leonard Kleinrock. Retrieved July 6, 2018, from https://youtu.be/vuiBTJZfeo8

Techopedia. (2018, July 6). What is Internet2? – Definition from Techopedia. Retrieved July 6, 2018, from https://www.techopedia.com/definition/24955/internet2

Wikipedia. (2018, July 03). National Science Foundation Network. Retrieved July 6, 2018, from https://en.wikipedia.org/wiki/National_Science_Foundation_Network

Wu, Chwan-Hwa (John). Irwin, J. David. (2013). Introduction to computer networks and cybersecurity. Hoboken: CRC Press.

 

FIT – MGT5157 – Week 1 – Discussion Response 1

James, good post and you make some very good points.  Five years ago most enterprises leveraged private MPLS (Multiprotocol Label Switching) networks to build their WAN (Wide Area Network) for things like intranet communication, unified communications, etc…  This reminds me of the Internet2 value proposition.

Source:  Maupin, 2016

Fast forward to today and MPLS is being supplanted at an alarming rate by technologies like SD-WAN (Software Defined WAN).  Proponents of MPLS argue that once your packets hit the public Internet, you will not be able to guarantee low levels of packet loss, latency, and jitter.  Sound familiar to any of the research on this topic?

OK, this might be somewhat true: you can't guarantee QoS (Quality of Service) on the public Internet.  But let's pause for a minute and think about how the market is shifting; cloud-based computing has had a major impact on the industry.  Cloud-based communications companies like 8×8, where the CEO happens to be a Florida Institute of Technology graduate, have challenged these notions and pushed technologies like SD-WAN to address the packet loss, latency, and jitter issues that make public Internet circuits a problem in certain use cases.

I always ask myself, would Arthur Rock put his money here?  Based on what I know about Internet2, at this point, I would say probably not.

References

Maupin, R. (2016, May 24). Have I designed correctly my MPLS network? Retrieved July 6, 2018, from https://networkengineering.stackexchange.com/questions/30673/have-i-desiged-correctly-my-mpls-network

 

FIT – MGT5157 – Week 1 – Discussion Response 2

Hailey, good post, I enjoyed reading it.  I have to say I wonder how relevant a private research and education network can be in today's age.  The project seems severely underfunded given the dollars being put into Internet capacity by the huge players in the space.  The other thing that makes me wonder whether Internet2 is viable is the fact that it is a domestic network living in an increasingly flat world.  Will research and education institutions using Internet2 connectivity be able to ride the network to Microsoft's submerged data center?

I just don't know about Internet2.  The information and mission feel a little dated.  100 Gigabit connectivity is commonplace today; these speeds are no longer just for carrier interconnects, they are everywhere in the modern data center.

The private sector is moving fast, and it has to innovate for competitive advantage.  The amount of cash being dumped into moonshot ideas in the private sector is unprecedented, which I think creates an even bigger problem for the long-term viability of Internet2.

FIT – MGT5156 – Week 8

Essay Assignment

You are the CISO of a large company. Using your own machine as an example, tell me how you would harden your own machine and how you would harden machines across the company, using ideas garnered from this class.

 

Final Exam

FIT – MGT5156 – Week 7

Discussion Post

Desktop Virtualization
Discuss whether desktop virtualization is a panacea.

No, virtualization (desktop or server) is not a panacea. While complex, attackers can exploit hypervisor technology by virtualizing an operating system and running malware at a level below the virtualized workloads, at the hypervisor layer. This approach makes the malware very hard to detect and operating system agnostic. (Ford, 2018) This type of malware has become known as a virtual-machine based rootkit (VMBR). A VMBR installs a virtual-machine monitor (VMM) underneath an existing operating system and hoists the original operating system into a virtual machine (guest OS). (King & Chen, 2006)

Virtualization can also be very helpful for malware analysis. It provides isolation, it can act as a trusted monitor (the hypervisor can watch how the system works while remaining protected from tampering), and it allows for rollback or disposable computing, which is very useful for malware testing. (Ford, 2018) While countless benefits are derived from virtualization, the hypervisor is just software, and like any other software, it can have vulnerabilities. If the hypervisor were exploited, it could provide an attacker with low-level system access, which could have serious, widespread implications. Successful exploitation of the hypervisor would give the attacker full control over everything in the hypervisor environment: all virtual machines, data, etc. (Obasuyi & Sari, 2015)

The "cloud" makes extensive use of virtualization technologies. (Ford, 2018) For example, Amazon Web Services (AWS) is built on the Xen hypervisor. Given the security concerns mentioned above and associated with the hypervisor, you can see the concern given the scale and multi-tenancy of cloud providers. (Vaughan-Nichols, 2015) Let's face it, the cloud is one giant honeypot; it's less a question of "if" than "when" a low-level exploit will happen in the cloud. Only time will tell.

To bring it back to desktop virtualization, I might argue that the security concerns with desktop virtualization exceed those with server virtualization for one reason: linked clones. The use of linked clones is quite common in desktop virtualization, but with all virtual desktops sharing common executables and libraries, malware can metastasize with each virtual desktop instantiation, and this would not require a compromised hypervisor, only a compromised master image. The other thing we need to consider is transparent page sharing and the potential manipulation of EXEs and DLLs in memory at the hypervisor level, and the impact that could have.

References

Ford, R. (2018, June 11). Virtualization. Retrieved June 11, 2018, from http://learningmodules.bisk.com/play.aspx?xml=L0Zsb3JpZGFUZWNoTUJBL01HVDUxNTYvQ1lCNTI4ME0xMFYxL0RhdGEvbW9kdWxlLnhtbA

King, S. T., & Chen, P. M. (2006). SubVirt: Implementing malware with virtual machines. Paper presented at the 2006 IEEE Symposium on Security and Privacy. doi:10.1109/SP.2006.38

Obasuyi, G. C., & Sari, A. (2015). Security Challenges of Virtualization Hypervisors in Virtualized Hardware Environment. International Journal of Communications, Network and System Sciences, 08(07), 260-273. doi:10.4236/ijcns.2015.87026

Vaughan-Nichols, S. J. (2015, December 04). Hypervisors: The cloud’s potential security Achilles heel. Retrieved June 13, 2018, from https://www.zdnet.com/article/hypervisors-the-clouds-potential-security-achilles-heel/

 

Discussion Response 1

I enjoyed your post and would like to offer up some food for thought.

There are lots of good reasons for desktop virtualization. The catalysts I typically see revolve around centralized command and control, with that desire often being aided by regulatory and/or compliance requirements. Five or so years ago we were seeing a huge push toward desktop and application virtualization on platforms like Citrix XenDesktop, Citrix XenApp, and VMware View, but this trend seems to have slowed, and it's not hard to understand why.

Let's look at a few of the challenges with desktop virtualization. From a security perspective, you now have east-west traffic to be concerned with; this is traffic taking place on the same physical hardware, not ingressing or egressing it (north-south traffic), so traditional network security controls don't really apply. This was a general hypervisor problem which has since been addressed, but it is a concern nonetheless. Next, we have the unpredictable performance profile of end-user usage: one user performing an I/O intensive process can impact all other users on that physical system. Then there is the downside of centralization, the risk that a shared component outage has a much larger blast radius. All of these contributing factors make desktop virtualization fairly costly.

New technologies like SaaS and browser-based apps, the rich user experience of HTML5, the ease of cross-platform development, the BYOD push, etc. seem to have slowed the desktop virtualization craze. Desktop virtualization is still happening, but at a slower pace. I use virtual desktops all the time for remote access or to run thick apps, but the virtual desktop is used more like an application rather than as a day-to-day shell from which I work. IMO, as long as there are umpteen versions of Java and compatibility issues between apps and Java versions, we will have a need for the virtual desktop to solve those issues. VDI also allows us to take thick client applications and quickly centralize them, although I know many people who have done this who wish they had just done an app rewrite rather than spending the time building VDI.

I agree with the VirtualizedGeek that DaaS is a better solution than VDI for those of us who need a cloud-based Windows desktop. (VirtualizedGeek, 2014) The article is a bit dated, and today many of us use AWS Workspaces or another DaaS solution for this very reason. I also agree with Ben Kepes that "Desktop as a Service is last year's solution to last decade's problem." (Kepes, 2013) The bottom line is the move toward mobile and web apps will continue, so while VDI may not be dying, I don't expect it to flourish.

References

Kepes, B. (2013, November 06). Death To VDI. Or DaaS. Or Whatever It’s Called This Week. Retrieved June 17, 2018, from https://www.forbes.com/sites/benkepes/2013/11/06/death-to-vdi-or-daas-or-whatever-its-called-this-week/#3e4c3295096a

Rouse, M. (2018, June 17). What is east-west traffic? – Definition from WhatIs.com. Retrieved June 17, 2018, from https://searchsdn.techtarget.com/definition/east-west-traffic

VirtualizedGeek. (2014, February 18). VDI is dying so what now? Retrieved June 17, 2018, from http://www.virtualizedgeek.com/2014/02/vdi-token-ring/

 

Discussion Response 2

Enjoyed the post, great read as usual, always like the emotion in your writing.

I have one rule about technology: never make a technology decision based on "saving money". When the primary value proposition is "you'll save money", it almost always tells you that there is no other value proposition meaningful enough to be a motivator. I have yet to meet someone who decided to implement VDI for cost savings and is happy they made the decision. I have met those who had to do it for regulatory and compliance purposes, who likely spent and continue to spend more on their virtual desktop infrastructure than they would have spent deploying desktops; these folks still may not be happy, but they are committed to the technology to solve a business problem for which they have yet to find another solution.

Desktop virtualization has been around for a long time. Citrix, the undisputed leader in the space, started in 1989 with the development of their protocol called ICA (Independent Computing Architecture). In the late 1990s Citrix released MetaFrame 1.0 to match the release of Microsoft Terminal Server. Citrix capitalized on the weakness of Microsoft's RDP protocol, and MetaFrame and the ICA protocol became the de facto standard for multi-tenancy at scale. The mainframe and mini-computer world was used to multi-tenancy, but Citrix brought multi-tenancy to the micro-computer and Wintel platform. This market pivot actually has close parallels to the cloud pivot we are seeing in enterprise computing today. In the 90s and early 2000s consumers listened to vendors; today consumers listen to the community, and the biggest voices are those consuming the platform at scale (fortunately for Citrix this wasn't the case as they rose to market prominence). There is no doubt that today Netflix holds as much weight with a new AWS user as AWS itself; Netflix is the 900-pound consumer gorilla, and their lessons learned are consumer lessons, not the lessons of AWS, who want you on the platform. The Netflix lessons are extremely relevant to the cloud, but they are also relevant to any move to multi-tenancy, VDI being one example. I think we are quickly moving past the days where "a guy with a huge handlebar mustachio with a cape on the back of a wagon" can espouse a cure-all. And for those willing to buy, well, in today's day and age it feels more like natural selection than someone being bamboozled.

Here are some of the publicly available Netflix lessons with some personal commentary. I love these lessons learned, and I use them in different contexts all the time. (Netflix Technology Blog, 2010) (Townsend, 2016)

  1. Dorothy, you’re not in Kansas anymore. It’s going to be different, new challenges, new methods and a need to unlearn much of what you are used to doing.
  2. It’s not about cost savings. Focus on agility and elasticity.
  3. Co-tenancy is hard. Architecture and process matter more than ever.
  4. The best way to avoid failure is to fail constantly. This is one that many enterprises are unwilling to accept. Trading the expectation of uptime for the expectation of failure and architecting to tolerate failure.
  5. Learn with real scale, not toy models. Buying a marketecture is not advisable, you need to test with your workloads, at scale.
  6. Commit yourself. The cost motivator is not enough; the motivator has to be more.
  7. Talent. The complexity and blast radius of what you are embarking on is significant, you need the right talent to execute.

The consumption and effective use of ever-changing and complex services require us to think differently. Netflix consumes services on AWS, and because they don't have to build hardware, install operating systems, build object storage platforms, write APIs to abstract and orchestrate the infrastructure, etc., they can focus on making their application more resilient by building platforms like the Simian Army (Netflix Technology Blog, 2011) and other tools like Hystrix (Netflix Technology Blog, 2012) and Vizceral (Netflix Technology Blog, 2016). The biggest problem with technologies that seemingly make things simpler is that the mass-market consumer looks for cost savings; they look for things to become easier, to lessen the hard dollar spend, to lessen the spend on talent, etc., and they don't redirect time or dollars to the new challenges created by new technologies. This is a recipe for disaster.

References

InfoQ. (2017, February 22). Mastering Chaos – A Netflix Guide to Microservices. Retrieved June 17, 2018, from https://youtu.be/CZ3wIuvmHeM

Netflix Technology Blog. (2010, December 16). 5 Lessons We’ve Learned Using AWS – Netflix TechBlog – Medium. Retrieved June 17, 2018, from https://medium.com/netflix-techblog/5-lessons-weve-learned-using-aws-1f2a28588e4c

Netflix Technology Blog. (2011, July 19). The Netflix Simian Army – Netflix TechBlog – Medium. Retrieved June 17, 2018, from https://medium.com/netflix-techblog/the-netflix-simian-army-16e57fbab116

Netflix Technology Blog. (2012, November 26). Introducing Hystrix for Resilience Engineering – Netflix TechBlog – Medium. Retrieved June 17, 2018, from https://medium.com/netflix-techblog/introducing-hystrix-for-resilience-engineering-13531c1ab362

Netflix Technology Blog. (2016, August 03). Vizceral Open Source – Netflix TechBlog – Medium. Retrieved June 17, 2018, from https://medium.com/netflix-techblog/vizceral-open-source-acc0c32113fe

Townsend, K. (2016, February 17). 5 lessons IT learned from the Netflix cloud journey. Retrieved June 17, 2018, from https://www.techrepublic.com/article/5-lessons-it-learned-from-the-netflix-cloud-journey/

 

Essay Assignment

In an essay form, develop an example of an XSS vulnerability and an exploit which displays it. You will be expected to include a snippet of code which illustrates an XSS vulnerability and also provides some general discussion of XSS vulnerabilities.

 

Web Vulnerabilities Module Assignment

FIT – MGT5156 – Week 6

Discussion Post

Discuss how testing of anti-malware should be conducted.

The only absolute rule seems to be: don't conduct anti-malware testing on your production systems. Testing of anti-malware should be performed in an isolated malware testing environment, and care should be taken to ensure that the system is completely isolated. For example, if you construct a malware test lab using a hypervisor and virtual machines but keep the virtual machines on your production network, well, let's just say that's not isolated. If correctly set up and configured, hypervisors and virtual machines can be a tester's best friend.

The Anti-Malware Testing Standards Organization (AMTSO) has developed and documented all sorts of testing guidance, from Principles of Testing to Facilitating Testing. The key here is that the testing method must be safe and it must use methods which are generally accepted (consistent, unbiased, transparent, empirical, etc.). (AMTSO, 2018)

The use of generally accepted tools and toolkits for malware research, testing and analysis can easily overcome certain testing obstacles, allowing the analyst to focus on the testing methodology rather than the acceptance of a specific testing tool or platform. Safely conducting testing and ensuring that you are not endangering yourself and others is the burden of the analyst; the complexity of the technologies being used to construct isolated environments and the malware itself can make this complicated, so there is plenty of room for error.

My two favorite toolkits for malware testing are:

  • Flare VM (Kacherginsky, 2017) is essentially a PowerShell script that uses Boxstarter and Chocolatey to turn a Windows 7 or later machine into a malware analysis distribution by quickly loading all the tools you need to do malware analysis.
  • REMnux is a Linux distribution for malware analysis and reverse-engineering. Like Flare VM, REMnux contains a set of tools to perform malware analysis and reverse engineering. Because REMnux is built on Linux (an open source operating system), it can be deployed using an install script like Flare VM or via a virtual machine (VM) image which packages the OS and tools making it easy to download, deploy and use.

There are a plethora of security-focused Linux distributions like Kali Linux, BackBox Linux, and the distribution which I use, Parrot Linux. All of these Linux-based security-focused distributions offer some of the tools required for malware analysis, but none are focused on malware analysis like REMnux.

Anti-malware is a requirement; it is the last line of defense. Simple malware scanners, heuristics, and activity/anomaly-based detection are not enough. Next-generation anti-malware with real-time scanning and discovery is a necessity. Malware can be identified using real-time detection technologies by monitoring activities like the following (a minimal monitoring sketch follows the list):

  • Attempts to alter restricted locations such as registry or startup files.
  • Attempts to modify executables.
  • Opening, deleting or editing files.
  • Attempts to write to or modify the boot sector.
  • Creating, accessing or adding macros to documents.
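
To make the real-time monitoring idea a bit more concrete, here is a minimal Python sketch (using the third-party watchdog package) that watches a couple of hypothetical directories and flags creation or modification of executable-type files. It is only an illustration of activity-based detection, not a substitute for a real anti-malware engine; the paths and extensions are assumptions for the example.

import time

from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

WATCH_PATHS = [r"C:\Users\Public", r"C:\Temp"]   # hypothetical directories to watch
SUSPECT_EXT = (".exe", ".dll", ".js", ".docm")   # example file types of interest

class SuspiciousChangeHandler(FileSystemEventHandler):
    # Called by watchdog whenever a file is modified under a watched path.
    def on_modified(self, event):
        if not event.is_directory and event.src_path.lower().endswith(SUSPECT_EXT):
            print(f"[ALERT] executable-type file modified: {event.src_path}")

    # Called by watchdog whenever a new file appears under a watched path.
    def on_created(self, event):
        if not event.is_directory and event.src_path.lower().endswith(SUSPECT_EXT):
            print(f"[ALERT] executable-type file created: {event.src_path}")

observer = Observer()
handler = SuspiciousChangeHandler()
for path in WATCH_PATHS:
    observer.schedule(handler, path=path, recursive=True)
observer.start()

try:
    while True:            # keep the watcher running until interrupted
        time.sleep(1)
except KeyboardInterrupt:
    observer.stop()
observer.join()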

Not all anti-virus and anti-malware software is created equal. avtest.org conducts independent analysis of the efficacy of anti-virus and anti-malware solutions; services like this can be an excellent resource for those looking to make the right decision when selecting anti-virus and anti-malware solutions.

I love this quote: "People have to understand that anti-virus is more like a seatbelt than an armored car: It might help you in an accident, but it might not," Huger said. "There are some things you can do to make sure you don't get into an accident in the first place, and those are the places to focus because things get dicey real quick when today's malware gets past the outside defenses and onto the desktop." (Krebs, 2010)

References

Adams, J. (2016, June 8). Building a Vulnerability/Malware Test Lab. Retrieved June 6, 2018, from https://westoahu.hawaii.edu/cyber/building-a-vulnerability-malware-test-lab/

AMTSO. (2018, June 6). Welcome to the Anti-Malware Testing Standards Organization. Retrieved June 6, 2018, from https://www.amtso.org/

Kacherginsky, P. (2017, July 26). FLARE VM: The Windows Malware Analysis Distribution You’ve Always Needed! « FLARE VM: The Windows Malware Analysis Distribution You’ve Always Needed! Retrieved June 6, 2018, from https://www.fireeye.com/blog/threat-research/2017/07/flare-vm-the-windows-malware.html

Krebs, B. (2010, June 25). Krebs on Security. Retrieved June 6, 2018, from https://krebsonsecurity.com/2010/06/anti-virus-is-a-poor-substitute-for-common-sense/

REMnux. (2018, June 6). REMnux: A Linux Toolkit for Reverse-Engineering and Analyzing Malware. Retrieved June 6, 2018, from https://remnux.org/

Williams, G. (2018, June 6). Detecting and Mitigating Cyber Threats and Attacks. Retrieved June 6, 2018, from https://www.coursera.org/learn/detecting-cyber-attacks/lecture/xE8ns/snort

 

Discussion Response 1

Good post. IMO it's essential when discussing anti-malware to consider attack vectors. While anti-malware heuristics are getting better, aided by deep learning, the primary attack vector remains the user, and it seems unlikely that a change in trajectory is on the near-term horizon. Attackers use numerous attack vectors, and when I think about the needle used to inject the virus, I think about examples such as:

  • Spam: Where email or social media are the delivery mechanism for malware.
  • Phishing, Spear Phishing, Spoofing, Pharming: Where attackers impersonate legitimate sources or destinations to trick unsuspecting victims to sites that capture personal information, exploit them, etc.

I use the examples above to convey that exploitation often begins with the exploitation of an individual, which happens before the malware ever infects their system. A lack of knowledge, skill, or vigilance, a misplaced sense of trust, etc. are all too often the root cause of an issue.

I just recently started taking a Coursera course called "Usable Security", and one area it focuses on is HCI (Human-Computer Interaction). The course stresses how important it is for the designer to make safeguards understandable and usable, not just by the minority of experts but by the majority of casual users. It uses two specific examples, at least so far. The first is a medical cart with a proximity sensor. On paper, the proximity sensor seems like a great idea, but it turns out the doctors didn't like it, so they covered the proximity sensors with styrofoam cups, making the system less effective than the prior system, which required the doctor to lock the computer after their session and relied on a reasonable login timeout. The second is the SSL warning system in Firefox, the warning you get about an expired or unsigned certificate, citing that most people don't know what this means and add an exception without much thought.

Over the years I have observed situations like the above with anti-malware software. The software slows the system down, so the user disables it, or the anti-malware software reports so many false positives that the user disables it. The bottom line is there is no replacement for human vigilance. I wonder if we can get to a place where the software can protect users from themselves. Whatever the solution, I believe it will need to be frictionless; we aren't there yet, but maybe someday.

References

Golbeck, J. (2018, June 10). Usable Security. University of Maryland, College Park. Retrieved June 10, 2018, from https://www.coursera.org/learn/usable-security

Texas Tech University. (2018, June 10). Scams – Spam, Phishing, Spoofing and Pharming. Retrieved June 10, 2018, from https://www.ttu.edu/cybersecurity/lubbock/digital-life/digital-identity/scams-spam-phishing-spoofing-pharming.php

 

Discussion Response 2

All good points.  It seems almost inconceivable that a tester would be testing something of which they have no knowledge, but of course, we know this is often the case (and it goes way beyond anti-malware software).

You bring up a good point regarding what the tester is testing for. I think we have entered the era of "total security" products that cover everything from firewall to anti-malware; this is likely born from necessity and the need to move from reactive, scan-focused anti-malware to proactive strategies which attempt to keep the malware out rather than just focusing on detection and remediation after the fact. I think we are seeing systems emerge today which leverage data mining and deep learning to better protect users. With the level of sophistication being used in both malware and anti-malware, I can't imagine the role of the tester getting any easier. We live in interesting times, and on a positive note, I think we can anticipate that they will only get more interesting.

 

Discussion Response 3

Good post. We've certainly seen some leaders in the security field have their ethics and motives questioned, most notably Kaspersky Lab (Volz, 2017). I have to admit, in the case of Kaspersky Lab it's hard not to wonder whether this is just a bunch of legislators who may have a bigger struggle with ethics and motivation than Kaspersky Lab does; this is a slippery slope. We live in a global economy, and having read what Kaspersky Lab volunteered to do, I can't help but wonder if this move has some marketing flair associated with it. avtest.org has consistently rated Kaspersky Lab anti-malware among the best in the industry (AV-TEST, 2018). Is it possible that the Kremlin could have an influence on Kaspersky Lab? I suppose it is (Matlack, Riley & Robertson, 2015), but do I think this was the motivation for the legislation? Not likely.

References

AV-TEST. (2018, April 06). AV-TEST – The Independent IT-Security Institute. Retrieved June 10, 2018, from https://www.av-test.org/en/award/2017/

Matlack, C., Riley, M., & Robertson, J. (2015, March 19). Cybersecurity: Kaspersky Has Close Ties to Russian Spies. Retrieved June 11, 2018, from https://www.bloomberg.com/news/articles/2015-03-19/cybersecurity-kaspersky-has-close-ties-to-russian-spies

Volz, D. (2017, December 12). Trump signs into law U.S. government ban on Kaspersky Lab software. Retrieved June 10, 2018, from https://www.reuters.com/article/us-usa-cyber-kaspersky/trump-signs-into-law-u-s-government-ban-on-kaspersky-lab-software-idUSKBN1E62V4?utm_source=applenews

 

Essay Assignment

How does anti-malware software detect viruses? What techniques are available, and how do they differ?

 

Viruses and Virus Detection Module Assignment

FIT – MGT5156 – Week 5

Discussion Post

Wow, week five already! The long weekend helped me get caught up and break the cycle I’ve been on, yay!

While not the latest in malware, I decided to discuss WannaCry (also known as WCry or WanaCryptor). (Hunt, 2017) The reason for my choice is that I have personal experience with this self-propagating (worm-like) ransomware. I have spent the last year working on various projects to mitigate the potential impact of ransomware like WannaCry. In this post, I will explain the ransomware approach that WannaCry took, as it does not differ dramatically from most recent ransomware. I will also talk a bit about some of the projects I have been involved in, some of my customers' concerns, and some mitigation strategies like WORM (Write once read many, 2018) and Isolated Recovery (Korolov, 2016) that I have helped automate and implement for customers.

A simple explanation of WannaCry is that it encrypts files, rendering them useless, and demands a ransom be paid, in bitcoin of course, to have the files decrypted.

Some basic information on WannaCry (Berry, Homan & Eitzman, 2017):

  1. WannaCry exploits a vulnerability in Microsoft's Server Message Block (SMB) protocol (also known as CIFS or Common Internet File System). (Microsoft, 2017) For our purposes, we can consider SMB and CIFS synonymous, but in the interest of education, the SMB protocol was invented by IBM in the mid-1980s and CIFS is Microsoft's implementation of SMB.
  2. The WannaCry malware consists of two key functions, encryption, and propagation.
  3. WannaCry leverages an exploit called EternalBlue (NVD, 2017) to exploit the vulnerability in Microsoft’s SMB protocol implementation.
  4. What makes WannaCry and other ransomware attacks incredibly dangerous is that once on a corporate network they begin propagating using vulnerabilities in sharing protocols like SMB. It's difficult to firewall these protocols because they are heavily used to share data across secure networks (see the SMB exposure audit sketch after this list).
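
Since the propagation path is SMB, one simple defensive exercise is auditing which hosts on a network actually expose TCP 445. Below is a minimal Python sketch of such an audit; the subnet is hypothetical, and it should obviously only be run against networks you are authorized to scan.

import socket

def smb_exposed(host, timeout=0.5):
    # Returns True if the host accepts connections on TCP 445 (SMB).
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, 445)) == 0

# Hypothetical internal /24; adjust to a network you are authorized to audit.
for i in range(1, 255):
    host = f"192.168.1.{i}"
    if smb_exposed(host):
        print(f"{host} exposes SMB (TCP 445)")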

Ransomware attacks like WannaCry, NotPetya, and Locky created serious concern across many enterprises who store terabytes and petabytes of data on shares which are accessed using the SMB protocol. Organizations started thinking about how they could mitigate the risk of ransomware and what their recovery plan would be if they were hit with ransomware.

Many customers who share data on the Windows server platform leverage the VSS (Volume Shadow Copy Service) to take snapshots and protect and version data. The idea of a snapshot is that it is a point-in-time copy which a user can roll back to. Developers writing malicious software understand pervasive mitigation techniques like the use of VSS snapshots, and they address them: crafty malware uses vssadmin.exe to remove VSS snapshots (previous versions) so a user can't roll back to an unencrypted version of the file(s).  (Abrams, 2016)
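
One simple defensive check that follows from this is verifying that shadow copies still exist, since their sudden disappearance is a common ransomware tell. Here is a minimal Python sketch of that idea; it just shells out to the real vssadmin utility and counts entries, and because the exact output format varies by Windows version, the parsing here is an assumption.

import subprocess

# Run the built-in Windows utility; typically requires an elevated prompt.
result = subprocess.run(
    ["vssadmin", "list", "shadows"],
    capture_output=True, text=True
)

# Each shadow copy entry includes a "Shadow Copy ID" line (format may vary by OS version).
count = result.stdout.count("Shadow Copy ID")
if count == 0:
    print("WARNING: no VSS shadow copies found - possible tampering or no snapshots configured")
else:
    print(f"{count} VSS shadow copies present")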

The obvious risk of having petabytes of data encrypted has raised questions regarding the vulnerability of enterprise NAS (Network Attached Storage) devices from manufacturers like Dell EMC, NetApp, etc. Enterprise-class NAS devices provide additional safeguards, such as filesystems which are not NTFS, no hooks to vssadmin, read-only snapshots, etc., so the protections are greater, but corporations are still concerned with zero-day exploits, so additional mitigation approaches are being developed. Backing up your data is an obvious risk mitigation practice, but many enterprises back up to disk-based backup devices which are accessible via the SMB protocol, so this has raised additional questions and cause for concern. A model called "Isolated Recovery" has emerged which leverages an air gap (Korolov, 2016) and other protection methods to ensure that data is protected; this is more a programmatic implementation of a process than it is a technology.

Example Topology
[HOST] <-> [NETWORK] <-> [SHARED STORAGE] <-> [NETWORK] <-> [BACKUP TARGET]
Note: This is a simple representation but what is important to know here is that the HOST, SHARED STORAGE and BACKUP TARGET (could be a disk-based backup target or a replicated storage device) are all SMB accessible.

Example Isolated Recovery Topology
[HOST] <-> [NETWORK] <-> [SHARED STORAGE] <-> [NETWORK] <-> [BACKUP TARGET] <-> [NETWORK] <-> /AIR GAP/ <-> [ISOLATED RECOVERY TARGET]
Note: In this case, there is a tertiary copy of the data which resides in an isolated recovery environment which is air gapped. This paradigm could also be applied with only two copies of the data by air gapping the backup target; a little trickier, but it can be done.

From a programmatic process perspective, the process might look something like this: https://gist.github.com/rbocchinfuso/a8b688546fad294d04281ab6eb632bfd#file-isolatedrecovery-md

A WORM (write once read many, not worm as in virus) process, triggered via cron or some other scheduler or trigger mechanism, might look something like this: https://gist.github.com/rbocchinfuso/b78a8a3a41021fc0df9c/#file-retentionlock-sh
Note:  This script is specific to WORM on a Data Domain disk-based backup device and leverages a feature called Retention Lock. The atime (access time) (Reys, 2008) of the file(s) is changed to a date in the future, which places the file in WORM-compliant mode until that date; once the date is reached, the file reverts back to read-write and can be deleted or modified.
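
For illustration, here is a minimal Python sketch of the same atime-based idea the script above implements; the directory and retention window below are hypothetical, and it only has the WORM effect when pointed at a share (e.g., a Retention Lock enabled Data Domain mtree) that actually enforces Retention Lock.

import os
import time

TARGET_DIR = "/backup/retention-locked"   # hypothetical Retention Lock enabled share
RETENTION_DAYS = 30                       # hypothetical retention window

future = time.time() + RETENTION_DAYS * 24 * 60 * 60

for name in os.listdir(TARGET_DIR):
    path = os.path.join(TARGET_DIR, name)
    if os.path.isfile(path):
        st = os.stat(path)
        # Push atime into the future and keep mtime as-is; on a Retention Lock
        # enabled target this places the file under WORM protection until 'future'.
        os.utime(path, (future, st.st_mtime))
        print(f"locked {path} until {time.ctime(future)}")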

References

Abrams, L. (2016, April 04). Why Everyone Should disable VSSAdmin.exe Now! Retrieved May 29, 2018, from https://www.bleepingcomputer.com/news/security/why-everyone-should-disable-vssadmin-exe-now/

Air gap (networking). (2018, May 27). Retrieved May 29, 2018, from https://en.wikipedia.org/wiki/Air_gap_(networking)

Berry, A., Homan, J., & Eitzman, R. (2017, May 23). WannaCry Malware Profile. Retrieved May 29, 2018, from https://www.fireeye.com/blog/threat-research/2017/05/wannacry-malware-profile.html

Hunt, T. (2017, May 18). Everything you need to know about the WannaCry / Wcry / WannaCrypt ransomware. Retrieved May 29, 2018, from https://www.troyhunt.com/everything-you-need-to-know-about-the-wannacrypt-ransomware/

Korolov, M. (2016, May 31). Will your backups protect you against ransomware? Retrieved May 29, 2018, from https://www.csoonline.com/article/3075385/backup-recovery/will-your-backups-protect-you-against-ransomware.html

Reys, G. (2008, April 11). atime, ctime and mtime in Unix filesystems. Retrieved May 29, 2018, from https://www.unixtutorial.org/2008/04/atime-ctime-mtime-in-unix-filesystems/

Microsoft. (2017, October 11). Microsoft Security Bulletin MS17-010 – Critical. Retrieved May 29, 2018, from https://docs.microsoft.com/en-us/security-updates/securitybulletins/2017/ms17-010

NVD. (2017, March 16). NVD – CVE-2017-0144 – NIST. Retrieved May 29, 2018, from https://nvd.nist.gov/vuln/detail/CVE-2017-0144

Write once read many. (2018, April 10). Retrieved May 29, 2018, from https://en.wikipedia.org/wiki/Write_once_read_many

 

Discussion Response 1

Good post on a very relevant and current topic.  IMO this trend will continue; the replacement of ASICs and RTOSes with commodity ARM/x86 architectures and Linux makes it a lot easier for someone to create malicious code that can exploit routers across multiple manufacturers like Linksys, MikroTik, Netgear, and TP-Link.  I remember 20 years ago when, if you wanted to go fast, you used an ASIC and an RTOS like VxWorks, but x86 got so fast that ASICs no longer made sense for most applications; the ability to commoditize the hardware with a general purpose OS like Linux drove down cost and increased release velocity, a win all around.  With that said, I think we may be on the doorstep of a new cycle.  We are seeing general purpose GPUs being used for everything from machine learning to crypto mining; these are essentially general purpose integrated circuits.  Power and environmental requirements are a big deal with general purpose GPUs, and I believe we are on the doorstep of a cycle that sees the return of the ASIC.  The TPU is the beginning of what I believe will be a movement to go faster, get greener, and be more secure.

 

Discussion Response 2

Well done, as usual; a well researched, well written, and engaging exploration of different types of malware.
My response is short this week because I spent most of my reading and responding time on Dr. Ford's polymorphic coding challenge, a great exercise; I wish there was more work like this.

 

Discussion Response 3

Dr. Ford’s polymorphic coding challenge

Has anyone else given Dr. Ford's polymorphic coding challenge a try?

Here is where I am:

  1. I am a Linux user, so I fired up a Win7 VM (I suppose I could have done this in a DOSBox or QEMU FreeDOS session, as Dr. Ford suggested, but it has been so long since I worked in 80 columns that I find it unbearable).
  2. I used Bloodshed Dev-C++ with MinGW as the C compiler.
  3. I got this far, but I think I am missing something, because the hex signature is obviously the same for each .com file. I feel like this should not be the expected behavior.

Source Code: https://gist.github.com/5859ee8be77fd188f78b64eaa8538c62#file-hello-c

YouTube video of the compile, execute and hex signature view of hello0.com and hello1.com files: https://youtu.be/2vQOS4E1JB0
Note: Be sure to watch in 1080p HD quality.

I am not sure how I would alter the hex. I believe the hex code at the top of the stack needs to be what it is; the hex code for "Hello World!" just maps back to the hex for the ASCII characters.

When I look at hello0.com, hello1.com, etc. with a hex viewer, the hex is the same, as you would expect. Does anyone have any thoughts on this? I would think a virus scanner would pick up this signature pretty easily.
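
One way to get differing signatures, purely as a sketch and not necessarily what Dr. Ford has in mind, is junk-byte insertion: have the generator prepend a random number of NOPs (0x90) to each emitted .com file and fix up the mov dx offset so the "Hello World!$" string is still found. The Python sketch below writes .com files equivalent to the disassembly I posted below, but with a varying prefix, so each file hashes differently; the filenames are just examples.

import random

def make_hello_com(path):
    n = random.randint(1, 16)                 # random-length NOP sled varies the signature
    msg = b"Hello World!$"                    # int 21h/AH=09h strings are '$'-terminated
    str_addr = 0x100 + n + 14                 # .com loads at 0x100; 14 code bytes follow the NOPs
    code = bytes([0x90] * n)                  # NOP padding (the junk bytes)
    code += bytes([
        0x0E,                                 # push cs
        0x1F,                                 # pop ds
        0xBA, str_addr & 0xFF, str_addr >> 8, # mov dx, offset of message
        0xB4, 0x09,                           # mov ah, 09h (print string)
        0xCD, 0x21,                           # int 21h
        0xB8, 0x01, 0x4C,                     # mov ax, 4C01h (terminate)
        0xCD, 0x21,                           # int 21h
    ])
    with open(path, "wb") as f:
        f.write(code + msg)

for i in range(3):
    make_hello_com(f"hello{i}.com")           # each file gets a different hex signature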

 

Discussion Response 4

Replying to my own post with disassembled hello0.com and hello1.com files.
Wondering if this is polymorphic because hello.exe and the spawned hello?.com files have differing signatures.

> ndisasm hello0.com
00000000 0E push cs
00000001 1F pop ds
00000002 BA0E01 mov dx,0x10e
00000005 B409 mov ah,0x9
00000007 CD21 int 0x21
00000009 B8014C mov ax,0x4c01
0000000C CD21 int 0x21
0000000E 48 dec ax
0000000F 656C gs insb
00000011 6C insb
00000012 6F outsw
00000013 20576F and [bx+0x6f],dl
00000016 726C jc 0x84
00000018 642124 and [fs:si],sp

bocchrj@WIN7 C:\src\hello
> decompile --default-to ms-dos-com hello0.com

bocchrj@WIN7 C:\src\hello
> decompile --default-to ms-dos-com hello1.com

bocchrj@WIN7 C:\src\hello
> type hello0.asm
;;; Segment code (0C00:0100)

;; fn0C00_0100: 0C00:0100
fn0C00_0100 proc
push cs
pop ds
mov dx,010E
mov ah,09
int 21
mov ax,4C01
int 21
0C00:010E 48 65 He
0C00:0110 6C 6C 6F 20 57 6F 72 6C 64 21 24 llo World!$

bocchrj@WIN7 C:\src\hello
> type hello1.asm
;;; Segment code (0C00:0100)

;; fn0C00_0100: 0C00:0100
fn0C00_0100 proc
push cs
pop ds
mov dx,010E
mov ah,09
int 21
mov ax,4C01
int 21
0C00:010E 48 65 He
0C00:0110 6C 6C 6F 20 57 6F 72 6C 64 21 24 llo World!$

 

Essay Assignment

What are the financial and other models which drive malware? How do they impact the types of malware seen?

 

Malware History Module Assignment