Richard J. Bocchinfuso

"Be yourself; everyone else is already taken." – Oscar Wilde

FIT – MGT5157 – Week 7

The submissions for this assignment are posts in the assignment’s discussion. Below are the discussion posts for Richard Bocchinfuso, or you can view the full discussion.

One-off post because Defcon is happening in Las Vegas. If you wanna see what you’re trying to protect against, I suggest following this week’s activities. 🙂  https://twitter.com/hashtag/defcon?src=hash

From solving fizzbuzz with TensorFlow to curing cancer and everything in between, machine learning is changing how we programmatically solve problems. We are no longer focusing on loops, conditionals, and functions to solve a finite problem, but rather using training data and machine learning to teach the computer how to solve problems even if the inputs change from what is expected.  Essentially we are using training data to teach the computer to reason; we call this inference.  Solving the fizzbuzz problem with TensorFlow is a great example of how machine learning can be used to solve a simple problem.

If you are not familiar with fizzbuzz, it’s a common programmer interview question.

Write a program that prints the numbers from 1 to 100. But for multiples of three print “Fizz” instead of the number and for the multiples of five print “Buzz”. For numbers which are multiples of both three and five print “FizzBuzz”.

A solution written in python might look like this:
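# A minimal sketch of a straightforward fizzbuzz: print 1 to 100, substituting
# Fizz, Buzz, or FizzBuzz for multiples of 3, 5, or both.
for i in range(1, 101):
    if i % 15 == 0:
        print("FizzBuzz")
    elif i % 3 == 0:
        print("Fizz")
    elif i % 5 == 0:
        print("Buzz")
    else:
        print(i)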

Above you can see the Python code solves the problem as presented, but I would have to alter the program to do the same thing for a dataset from 101 to 1000. The ridiculous example of using TensorFlow to solve fizzbuzz is the work of Joel Grus, and he wrote a hilarious blog post on it. Even though it is a ridiculously complex solution to the problem, and it yields the wrong answer, it is a great simple exercise to demonstrate the value of a neural network.

Maybe Elon Musk’s warning that AI could become “an immortal dictator from which we would never escape” is exaggerated for effect and Twitter fame, but it seems clear that AI will be a strong field general with supreme control over the chosen battlefield.  It’s about more than autonomous machines; it’s about autonomous everything. It’s about not solving fizzbuzz with loops and conditional statements, but rather by building a neural network that can solve any variation of fizzbuzz.  It’s not about using malware signatures and firewall rules which statically protect north-south and east-west traffic, or stateful packet inspection which requires a known signature, but rather building a neural network that can continuously train and continuously improve protections. Bad news: the hacker community is also leveraging machine learning, deep learning, and AI to find and exploit vulnerabilities.  It’s an arms race and both sides have fully operational uranium enrichment plants, which we’ll call TensorFlow, MXNet, and PyTorch, and a seemingly endless supply of uranium, which we’ll call cloud GPUs. 🙂  Cisco calls this “The Network. Intuitive.” I only use Cisco as an example because they made a fancy commercial that dramatizes the use of machine learning, deep learning, and artificial intelligence to build what they call “The Network. Intuitive.”  Oh, and who doesn’t love Tyrion Lannister?

 

Discussion: Identify requirements that should be considered when determining the locations and features of firewalls. What are some important steps to take to keep firewalls effective?

In the context of “determining the locations and features of firewalls,” I believe it is critical to understand how infrastructure and traffic patterns are evolving. Firewalls have always been essential in filtering and protecting north-south network traffic. The emergence of technologies like virtualization and software-defined networking (SDN) has dramatically increased east-west network traffic. Just as long-range ballistic missiles impacted aspects of the layer one protection provided by the oceans, these technologies have negated aspects of the layer one protection provided by physical network segmentation. Technologies like virtualization and SDN have accelerated the development of next-generation firewalls (NGFW) that deliver a “deep-packet inspection firewall that moves beyond port/protocol inspection and blocking to add application-level inspection, intrusion prevention, and bringing intelligence from outside the firewall.” (Aldorisio, 2017)

Most people are reasonably familiar with perimeter security best practices. A model that many people are familiar with is the bastion host topology, the type of firewall topology deployed on most home networks, where the LAN (Intranet) and WAN (Internet) are firewalled by a cable modem which acts as the router and firewall.

A more complex network may utilize a screened subnet topology with the implementation of a DMZ (Demilitarized Zone). In the screened subnet topology, systems that host public services are placed on the DMZ subnet rather than on the LAN subnet. The screened subnet topology separates public services from the LAN or trusted subnet by locating publicly accessible services in the DMZ. This approach adds a layer of protection so that if a publicly available service becomes compromised, there is an added layer of security aimed at stopping an attacker from traversing from the DMZ subnet to the LAN subnet.

A topology which takes the screened subnet a step further is a dual firewall topology, where the DMZ (Demilitarized Zone) is placed between two firewalls. The dual firewall topology is a common topology implemented by network security professionals, often using firewalls from different providers as an added layer of protection should an attacker identify and exploit a vulnerability in a vendor’s software.

Enterprise-grade firewalls also allow for more complex topologies which extend the topologies described above beyond internal (LAN), external (WAN), and DMZ networks. Enterprise-grade firewalls support more interfaces, faster processors which allow more layered intelligent services, higher throughput, etc. Support for software features such as virtual interfaces, VLANs, VLAN tagging, etc. allows for greater network segmentation, enabling the ideas discussed above to be applied discretely based on requirements.

Some steps to maintain firewall effectiveness include (Mohan, 2013):

  • Clearly define a firewall change management plan
  • Test the impact of firewall policy changes
  • Clean up and optimize the firewall rule base (see the sketch after this list)
  • Schedule regular firewall security audits
  • Monitor user access to firewalls and control who can modify the firewall configuration
  • Update firewall software regularly
  • Centralize firewall management for multi-vendor firewalls
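As a small illustration of the rule-base cleanup step, here is a minimal sketch that flags exact duplicate rules in an iptables-save dump. The file name rules.txt is a hypothetical placeholder, and a real cleanup would also hunt for shadowed and unused rules:

# Flag exact duplicate rules in an iptables-save dump (hypothetical file name).
from collections import Counter

with open("rules.txt") as f:
    rules = [line.strip() for line in f if line.startswith("-A")]

for rule, count in Counter(rules).items():
    if count > 1:
        print(f"duplicate ({count}x): {rule}")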

References

Aldorisio, J. (2017, November 27). What is a Next Generation Firewall? Learn about the differences between NGFW and traditional firewalls. Retrieved August 17, 2018, from https://digitalguardian.com/blog/what-next-generation-firewall-learn-about-differences-between-ngfw-and-traditional-firewalls

Chapple, M. (2018, August 17). Choosing the right firewall topology: Bastion host, screened subnet or dual firewalls. Retrieved August 17, 2018, from https://searchsecurity.techtarget.com/tip/Choosing-the-right-firewall-topology-Bastion-host-screened-subnet-or-dual-firewalls

Ergun, O. (2015, January 10). What is East-West and North-South Traffic | Datacenter Design. Retrieved August 17, 2018, from https://orhanergun.net/2015/01/east-west-north-south-traffic/

Hossain, M. (2014, May 21). Trends in Data Center Security: Part 1 – Traffic Trends. Retrieved August 17, 2018, from https://blogs.cisco.com/security/trends-in-data-center-security-part-1-traffic-trends

How Does Micro-Segmentation Help Security? Explanation. (n.d.). Retrieved August 17, 2018, from https://www.sdxcentral.com/sdn/network-virtualization/definitions/how-does-micro-segmentation-help-security-explanation/

Mohan, V. (2013). Best Practice for Effective Firewall Management. Retrieved August 17, 2018, from http://cdn.swcdn.net/creative/v9.3/pdf/Whitepapers/Best_Practices_for_Effective_Firewall_Management.pdf

Network and Traffic Segmentation. (n.d.). Retrieved August 17, 2018, from https://www.pluribusnetworks.com/solutions/network-traffic-segmentation/

Scott, good post.  Question: Do you think that it will be possible to compete in the enterprise NGFW market without a cloud-based model?  My contention is that the aggregation and profiling of data gathered from deep packet inspection across the entire industry will allow NGFW OEMs to better identify and address threats.  These datasets will also function as training data for machine learning, deep learning, and AI models.  My belief is that the cloud is and will continue to play a huge role in the innovation and adoption of NGFW technologies.

Assignment

[google-drive-embed url=”https://docs.google.com/document/d/1V0cvh6C5S2EVwIJrYSJeezlS4tsBvHiJ-0WabSzYjyI/preview?usp=drivesdk” title=”Bocchinfuso – FIT – MGT5157 – Week 7 – Assignment 6″ icon=”https://drive-thirdparty.googleusercontent.com/16/type/application/vnd.google-apps.document” width=”100%” height=”400″ style=”embed”]

FIT – MGT5157 – Week 6

The submissions for this assignment are posts in the assignment’s discussion. Below are the discussion posts for Richard Bocchinfuso, or you can view the full discussion.

Discussion: Describe the basis for effective collaboration of security defenses within and between organizations.

This is an interesting question. I think ten years ago the effective collaboration of security defenses within and between organizations would have been highly dependent on effective, open communication between these organizations. Today I think the effective collaboration of security defenses is being aided by two core technology shifts:

  1. Cloud
  2. Machine Learning, Deep Learning, AI

Let’s start with the cloud. Today’s security providers are increasingly becoming cloud-enabled; they are relying on the aggregation of massive data sets (big data) for heuristics on massive compute farms that far surpass what is possible in a heuristics engine on a laptop, desktop, or mobile device. Just about every security technology provider is leveraging the cloud and the vast resources it provides. When organizations buy into cloud-based security paradigms it is the equivalent of sharing and communicating information, but this information is now being aggregated, anonymized, analyzed, and cross-referenced in real-time.  (Quora Contributor, 2018)

Machine learning, deep learning, and AI are not just buzzwords; they are technologies that harness data and continuously train models that can begin to see things which are not visible to the naked eye. These technologies are greatly altering how we think about security. Security providers like AlertLogic, Secureworks, and many others focus on IPS/IDS and incident response models that leverage data which is anonymized, but aggregated and analyzed across their entire customer base, and this has tremendous value. Security providers like Tanium and Panda Security and others who focus on end-point security also use cloud technologies, big data, and machine learning to provide superior heuristics. For example, the embedded anti-malware in Windows 10 makes use of “cloud-based protection” to better protect users; users are opted in to collaborating, and opting out requires user intervention that is buried in the bowels of the operating system and anti-malware (Windows Defender) configuration settings.

Collaboration and engagement require a focus on Human-Computer Interaction (HCI) to drive system usability and adoption; this is especially true in the field of security. Users vary and they have different expectations of the systems they interact with. A simple blacklist or whitelist approach no longer gets the job done; these approaches slow productivity and encourage working around the system. (Coursera, 2018)

Intelligent security systems which leverage AI may be able to adapt security protocols based on user usage profiles. For example, which users took the lollipop and which users didn’t, and should the way security is enforced for these two user types differ? (DreamHost, 2018)

To close out my thoughts this week, I will end with an example of a security problem that is not a platform problem, but rather a use problem, as is often the case. For those of us who have used Amazon (AWS) S3, the AWS object storage service, we know that AWS offers extremely fine-grained ACLs for S3 buckets; the security paradigm is quite robust and defaults to no-access. But this robustness and fine-grained programmatic and composable infrastructure comes with complexity (Amazon, 2018), complexity leads to usability challenges, and usability challenges lead to exposing data which is not intended to be exposed. This week that victim was GoDaddy, who exposed an S3 bucket containing configuration data for tens of thousands of systems, as well as sensitive pricing information, apropos given our collective conversations last week regarding GoDaddy and DNS registrars.  (Chickowski, 2018)
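Since the GoDaddy exposure came down to a bucket permission, here is a quick audit sketch using boto3 (assuming AWS credentials are already configured; the check is limited to bucket ACLs and ignores bucket policies):

# Flag S3 buckets whose ACL grants access to the AllUsers (public) group.
import boto3

s3 = boto3.client("s3")
PUBLIC_GROUP = "http://acs.amazonaws.com/groups/global/AllUsers"

for bucket in s3.list_buckets()["Buckets"]:
    acl = s3.get_bucket_acl(Bucket=bucket["Name"])
    for grant in acl["Grants"]:
        if grant["Grantee"].get("URI") == PUBLIC_GROUP:
            print(f"{bucket['Name']}: public grant ({grant['Permission']})")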

With > 80% of all corporations experiencing a hack of some sort, exploitation is on the rise and there is no end in sight. (Lipka, 2015) As we continue toward a public cloud world, platforms are providing more choice, easier access, and the ability to be more agile, build faster, and come to market faster, but we’ve lost the simplistic nature of layer 1 security. We have to have security systems that live at a layer above layer 1 human interaction and communication. I believe that progress will depend on the ability of the security systems of today and tomorrow to facilitate zero-touch collaboration in an automated and secure way.

References

Amazon. (2018, August 10). Bucket Policy Examples. Retrieved August 10, 2018, from https://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html

Chickowski, E. (2018, August 9). AWS Employee Flub Exposes S3 Bucket Containing GoDaddy Server Configuration and Pricing Models. Retrieved August 10, 2018, from https://www.darkreading.com/attacks-breaches/aws-employee-flub-exposes-s3-bucket-containing-godaddy-server-configuration-and-pricing-models/d/d-id/1332525

Coursera. (2018, August 10). Usable Security. Retrieved August 10, 2018, from https://www.coursera.org/lecture/usable-security/course-intro-60olh

DreamHost. (2018, January 30). Take This Lollipop… I Dare You! Retrieved August 10, 2018, from https://www.dreamhost.com/blog/take-this-lollipop-i-dare-you/

ElsonCabral. (2011, October 26). Take This Lollipop. Retrieved August 10, 2018, from https://www.youtube.com/watch?v=pbQm-nIMo_A

Lipka, M. (2015, June 05). Percentage of companies that report systems hacked. Retrieved August 10, 2018, from https://www.cbsnews.com/news/percentage-of-companies-that-report-systems-hacked/

Quora Contributor. (2018, February 15). How Will Artificial Intelligence And Machine Learning Impact Cyber Security? Retrieved August 10, 2018, from https://www.forbes.com/sites/quora/2018/02/15/how-will-artificial-intelligence-and-machine-learning-impact-cyber-security/#34f878166147

James, I would go as far as to say that unless mandated by a regulatory requirement very few enterprises are advertising breaches, and even when mandated by regulatory bodies they are pushing the boundaries of disclosure.  For example, Equifax took six weeks to disclose its hack, and it is not the only major enterprise in a regulated industry looking to delay disclosure.   The bigger the organization and the more sensitive the data, the tighter and more broad-sweeping the NDAs.  Ed Snowdens are not falling out of trees, and the number of statistical breaches, when contrasted with the number of reported breaches, says there is more interest in obfuscation than there is in disclosure.  Sure, the OTR conversations can happen at an InfoSec meetup, but the bigger the enterprise the more isolated and focused exposure is becoming, with access to systems, processes, conversations, etc. becoming so tightly governed that it’s getting harder and harder to assemble a full picture of a situation. Those who do have the complete picture don’t attend InfoSec meetups; they are busy having dinner at Le Bernardin. 🙂

I think it’s fair to assume we know only a small fraction of what’s happening and that the preponderance of the most diabolical stuff never makes it into the mainstream.  As technology becomes a profit center for every company, we will see more and more of this.  The days of “we are a manufacturing company and tech is a cost center” are over; big data, analytics, and machine learning are driving every industry, with the CMO spending more on technology than the CIO.

Not saying we shouldn’t keep trying, but I believe we will see significant innovations that will change the game, relying less on the good behavior of people and more on the machine to make and monitor decisions.  Andrew mentioned the Target breach; there is no reason that a PLC network for HVAC controls should have layer 2 or higher access to a network for payment processing, and IMO even layer 1 is questionable. What should have been disclosed is the name of the network architect who built that infrastructure and everyone who looked at it thereafter and didn’t yell from the rooftop.

 

References

Isidore, C. (2017, September 8). Equifax’s delayed hack disclosure: Did it break the law? Retrieved August 10, 2018, from https://money.cnn.com/2017/09/08/technology/equifax-hack-disclosure/

McLellan, L. (n.d.). By 2017 the CMO will Spend More on IT Than the CIO. Retrieved August 10, 2018, from https://www.gartner.com/webinar/1871515

 

Andrew, let’s assume that an organization or organizations have a well designed and implemented network infrastructure using platforms from providers like Cisco, Juniper, Palo Alto, etc.


Organizations acting together (e.g. suppliers and buyers in a supply chain system) can secure their data exchange on encrypted channels, they can use multi-factor authentication, they can use geo-fencing, they can use certificate-based PKI smart cards, but what if the exploit resides in the router or firewall code?  What if there is an APT (Advanced Persistent Threat) against organization X which exploits some vulnerability in the router or firewall code?  When organization X identifies the breach, do they communicate that they have been breached?  If so, to whom?  While agreeing that open communication is key to slowing the bad guys, reducing the blast radius, etc., I also believe there are few organizations willing to volunteer that they have been breached; this is especially true if the breach has to do with human error, which it so often does.  The reports we see are typically driven by watchdog groups, like the recent GoDaddy breach; by a regulatory requirement to disclose, like the Target or Equifax breach; or by a catastrophe, like the CodeSpaces breach, but in most cases the motivation to disclose is not very strong at all.  I believe the answer resides in anonymizing breach reports, focusing a little less on corporate accountability and more on getting the data needed to start programmatically plugging the gaps, making the system less punitive and mandating more tech to secure the network so the machine may save us from ourselves.  In essence, more carrot and less stick.  For example, what if in the case of Target there was stateful packet inspection which saw both PLC data and payment processing data flowing on the same network and took automated action to segment the traffic, shut the traffic down, etc.?  Sure, these technologies will get hacked as well, but people are inherently poor binary decision makers and I think we will see a different paradigm emerge.  I think we are seeing it already.

Scott, enjoyed the post.  I think this is my first comment on one of your posts in this class.  I like Canvas 1000x better than the old LMS, but it feels like this class has more students or something because the discussion threads are long.  Anyway, I have a few friends who work for FireEye; they are 100% focused on APTs (Advanced Persistent Threats) and what they will say is that FireEye focuses on four things: Prevent, Detect, Contain, Resolve. While prevention and detection are important, with APTs the bad guys will typically find a way in, so they put a heavy focus on containment.  What containment is about is not letting the bad guys leave once they are in.  I always think about the bar scene from the movie “A Bronx Tale”.  The bad guys walk into the bar, but then they are contained. 🙂

Starting to see more and more focus on preventing data exfiltration (DLP).

Assignment

[google-drive-embed url=”https://docs.google.com/document/d/1BvA-CaYSKm7rdegCd7t-gD-UUwki15mbCroNSoZRhwo/preview?usp=drivesdk” title=”Bocchinfuso – FIT – MGT5157 – Week 6 – Assignment 5″ icon=”https://drive-thirdparty.googleusercontent.com/16/type/application/vnd.google-apps.document” width=”100%” height=”400″ style=”embed”]

FIT – MGT5157 – Week 5

The submissions for this assignment are posts in the assignment’s discussion. Below are the discussion posts for Richard Bocchinfuso, or you can view the full discussion.

Discussion: What is the market for DNS control? Who are the big players in managing domain names? Can domain names be exploited?

The market for DNS control is competitive. There is more to DNS control than just owning the DNS resolution; companies like GoDaddy are domain registrars, but they also provide services which leverage those domains, services like web hosting and email.  Organizations like GoDaddy started as registrars and grew into internet service providers. The same is true of organizations like AWS, who started as service providers and saw an opportunity to be the domain registrar, so AWS started a service called Route 53 (cool name because port 53 is the port that DNS runs on).

Domain names are controlled by ICANN (Internet Corporation for Assigned Names and Numbers). ICANN is a non-profit organization that acts as the governing body tracking domain names maintained by domain name registrars like GoDaddy and NameCheap. The ICANN master domain name database can be queried using “whois”.

Authoritative DNS root servers are controlled by only a few key players; these hostnames actually point to an elaborate network of DNS servers around the world.

Source:  Iana. (2018, August 3). Root Servers. Retrieved August 3, 2018, from https://www.iana.org/domains/root/servers

It’s not hard to understand why VeriSign is at the top of the list when you understand the relationship between ICANN and VeriSign.  As you look down the list, not surprisingly, there is a correlation between the authoritative DNS root servers and Class A address ownership. With the DoD owning 12 Class A addresses you would imagine they would have an authoritative root DNS server.

Source:  Pingdom. (2008, February 13). Where did all the IP numbers go? The US Department of Defense has them. Retrieved August 3, 2018, from https://royal.pingdom.com/2008/02/13/where-did-all-the-ip-numbers-go-the-us-department-of-defense-has-them/

 

Querying the ICANN database for a specific domain name will return relevant information about the domain name as well as the registrar.

A “whois bocchinfuso.net” reveals the registrar as NameCheap, the NameCheap IANA ID, etc.

Each domain registrar is assigned a registrar IANA (Internet Assigned Numbers Authority) ID by ICANN.
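Whois itself is just a plain-text exchange over TCP port 43 (RFC 3912). A minimal sketch, assuming the .com/.net registry whois server at whois.verisign-grs.com:

# Send a domain name to the registry whois server and print the reply,
# which includes the registrar name, registrar IANA ID, name servers, etc.
import socket

query = "bocchinfuso.net"
with socket.create_connection(("whois.verisign-grs.com", 43)) as sock:
    sock.sendall((query + "\r\n").encode())
    response = b""
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            break
        response += chunk
print(response.decode(errors="replace"))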

DomainState tracks statistics about domain registrars, so we can easily see who the major registrars are.

Source:  DomainState. (2018, August 3). Registrar Stats: Top Registrars, TLD Marketshare, Top Registrars by Country. Retrieved August 3, 2018, from https://www.domainstate.com/registrar-stats.html

GoDaddy is ~ 6x larger than the number two registrar. GoDaddy has grown to nearly 60 million registered domains both organically and through acquisition.

Yes, DNS can be exploited. DNS allows attackers to more easily identify their attack vector. DNS servers are able to perform both forward lookups (mapping a DNS name to an IP address) and reverse lookups (mapping an IP address to a DNS name); this allows attackers to open the internet phone book, easily acquire a target, and commence an advanced persistent threat (APT).
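Both lookup directions take only a few lines with the standard library; a minimal sketch (example.com is used as a stand-in target):

# Forward lookup (name -> IP) and reverse lookup (IP -> name).
import socket

ip = socket.gethostbyname("example.com")          # forward: resolve the A record
print("forward:", ip)

try:
    name, _aliases, _addrs = socket.gethostbyaddr(ip)  # reverse: PTR lookup
    print("reverse:", name)
except socket.herror:
    print("no PTR record published for", ip)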

Domain names are often linked with branding, so once an APT commences against a domain the owner can’t easily move.  DNS can also play a role in protecting against threats. Services like Quad9 and OpenDNS provide DNS resolvers which are security aware. These DNS resolvers block access to malicious domains.

Because DNS names are how we refer to internet properties, typosquatting is a popular DNS threat. Typosquatting is a practice where someone registers a DNS name that is similar to a popular domain name, capturing everyone who typos the popular domain name.

DNS servers are ideal DDoS attack targets because the inability to resolve DNS addresses has an impact across the entire network.

Registrar domain hijacking is when the attacker gains access to your domain by exploiting the registrar. Once the attacker has access to the domain records they can do anything from changing the A record to a new location to transferring the domain to a new owner. There are safeguards that can be put in place to protect against unauthorized transfers, but someone gaining access to your registrar account is not a good situation.

DNS is a massive directory, and to decrease latency DNS caches are placed strategically around the Internet. These caches can be compromised by an attacker, and resolved names may take an unsuspecting user to a malicious website. This is called DNS spoofing or cache poisoning.

These are just a few DNS attack vectors; there are plenty of others. The convenience of DNS is also what creates the risk. DNS makes it easy for us to find our favorite web properties like netflix.com, but it also makes it just as easy for an attacker to find netflix.com.

 

References

DomainState. (2018, August 3). Registrar Stats: Top Registrars, TLD Marketshare, Top Registrars by Country. Retrieved August 3, 2018, from https://www.domainstate.com/registrar-stats.html

Iana. (2018, August 3). Root Servers. Retrieved August 3, 2018, from https://www.iana.org/domains/root/servers

ICANN. (2018, August 3). ICANN64 Fellowship Application Round Now Open. Retrieved August 3, 2018, from https://www.icann.org/

Mohan, R. (2011, October 5). Five DNS Threats You Should Protect Against. Retrieved August 3, 2018, from https://www.securityweek.com/five-dns-threats-you-should-protect-against

Pingdom. (2008, February 13). Where did all the IP numbers go? The US Department of Defense has them. Retrieved August 3, 2018, from https://royal.pingdom.com/2008/02/13/where-did-all-the-ip-numbers-go-the-us-department-of-defense-has-them/

Carmeshia, I enjoyed your post. You bring up an interesting point regarding centralization, control, and exploitation. What do you think is more secure, a centralized or decentralized DNS registrar system?

With the increase in APTs (advanced persistent threats) I tend to favor decentralization, but everyone has a perspective; interested in hearing yours.

Nawar, good post, I enjoyed reading it. While DNS is not a security-centric protocol, few protocols are. The network’s reliance on DNS is both a good and bad thing. Because DNS name resolution is such a critical network function, it is the target of attacks like DDoS attacks, because the blast radius of an attack on DNS is significant. With this said, the essential nature of DNS also has many focused on protecting it and mitigating risk. Services like Cloudflare, Akamai, Imperva Incapsula, Project Shield, and others have built robust anti-DDoS systems to identify and shed DDoS traffic.

Sharing some pretty interesting data when comparing the top DNS providers.

https://www.datanyze.com/market-share/dns/Datanyze%20Universe/

When you start to segment domains by Alexa rank, GoDaddy gets outranked by Cloudflare, Amazon Route 53, Akamai, and Google DNS pretty consistently.


Some good detail on why in this article:  https://stratusly.com/best-dns-hosting-cloudflare-dns-vs-dyn-vs-route-53-vs-dns-made-easy-vs-google-cloud-dns/

The moral of the story here is that while GoDaddy appears to be the Goliath, they are only in terms of domain name registration volume; the FANG (Facebook, Apple, Netflix, Google) type companies own the internet traffic, and the volume DNS registration game is becoming a commodity.  GoDaddy has the first-mover advantage, but competitors like namecheap.net and name.com are coming after them.  With Netflix accounting for nearly 40% of all internet traffic, the FANG companies matter, and I don’t think the Cloudflares, Akamais, and Amazon Route 53s of the world want to chase the GoDaddy subscriber base.

Assignment

[google-drive-embed url=”https://docs.google.com/document/d/1WKk18vES6E865CxO6Rh7NHRjXVlCfcseu9i8lEL7qYs/preview?usp=drivesdk” title=”Bocchinfuso – FIT – MGT5157 – Week 5 – Assignment 4″ icon=”https://drive-thirdparty.googleusercontent.com/16/type/application/vnd.google-apps.document” width=”100%” height=”400″ style=”embed”]

FIT – MGT5157 – Week 3

The submissions for this assignment are posts in the assignment’s discussion. Below are the discussion posts for Richard Bocchinfuso, or you can view the full discussion.

Discussion: Describe the differences between IPv6 and IPv4. What implications does it have on networks? On the user? What could be done to speed up the transition process?

First let’s talk about a major catalyst for the development and adoption of IPv6: the idea that the internet would exhaust the available IP address space. This prediction was made back in 2011, and it was stated the Internet would exhaust all available IP addresses by 4 AM on February 2, 2011. (Kessler, 2011) Here we are 2,725 days later and the “IPcalypse” or “ARPAgeddon” has yet to happen; in fact @IPv4Countdown is still foreshadowing the IPv4 doomsday scenarios via Twitter. So what is the deal? Well, it’s true the available IPv4 address space is limited, with a pool of slightly less than 4.3 billion addresses (2^32, more on this later). It is important to remember that many of these predictions predate Al Gore taking credit for creating the internet. Sorry Bob Kahn and Vint Cerf, it was Al Gore who made this happen.

Back in the 1990s we didn’t have visibility into technologies like CIDR (Classless Interdomain Routing) and NAT (Network Address Translation). In addition, many of us today use techniques like reverse proxying and proxy ARPing. Simplistically, this allows something like NGINX to act as a proxy (middleman) where all services can be placed on a single port behind a single public IP address and traffic can be appropriately routed and proxied using that single public IP address.

For example, a snippet of an NGINX reverse proxy config might look something like this:

# Requests with Host: site.foo.com are proxied to the internal host behind this block.
server {
    listen 80;
    server_name site.foo.com;
    location / {
        access_log on;
        client_max_body_size 500M;
        proxy_pass http://INTERNAL_HOSTNAME_OR_IP;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

# Requests with Host: site.bar.com are proxied to a different internal host.
server {
    listen 80;
    server_name site.bar.com;
    location / {
        access_log on;
        client_max_body_size 500M;
        proxy_pass http://INTERNAL_HOSTNAME_OR_IP;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

Let’s assume that there are two DNS A records, one for site.foo.com and one for site.bar.com, that both point to the same IP address, with a web server expected to answer on port 80 for both names.  How does a request for site.foo.com know to go to web server A and a request for site.bar.com know to go to web server B? The answer is a reverse proxy which can proxy the request; this is what we see above.

I use this configuration for two sites which I host, bocchinfuso.net and gotitsolutions.org.

A dig (domain information groper) of both of these domains reveals that their A records point to the same IP address; the NGINX reverse proxy does the work to route to the proper server or service based on the requested server name and proxies the traffic back to the client. nslookup would work as well if you would like to try, but dig gives a little cleaner display for posting below.

$ dig bocchinfuso.net A +short
173.63.111.136
$ dig gotitsolutions.org A +short
173.63.111.136

NGINX is a popular web server which can also be used for reverse proxying, like I am using it above, as well as load balancing.

IPv6 (Internet Protocol version 6) is the next generation or successor to IPv4 (Internet Protocol version 4). IPv4 assigns a numerical address using four octets which are each 8 bits, comprising a 32-bit address. IPv4 addresses are written as four numbers between 0 and 255.

Source:  ZeusDB. (2015, July 30). Understanding IP Addresses – IPv4 vs IPv6.

IPv6 addresses consist of eight 16-bit segments, comprising a 128-bit address, giving IPv6 a total address space of 2^128 (~340.3 undecillion), which is a pretty big address space. To put 2^128 into perspective, it is enough available IP address space for every person on the planet to personally have about 2^95, or roughly 39.6 octillion, IP addresses. That’s a lot of IP address space.

Source:  ZeusDB. (2015, July 30). Understanding IP Addresses – IPv4 vs IPv6.
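A quick back-of-the-envelope check of those numbers (the population figure is an assumption, chosen to match the roughly 2^95-per-person figure cited above):

# Compare the IPv4 and IPv6 address spaces and the per-person IPv6 share.
ipv4_space = 2 ** 32                  # ~4.3 billion addresses
ipv6_space = 2 ** 128                 # ~340.3 undecillion addresses
population = 8_600_000_000            # assumed head count (~2^33 people)

print(f"IPv4: {ipv4_space:,}")
print(f"IPv6: {ipv6_space:,}")
print("IPv6 addresses per person:", ipv6_space // population)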

One of the challenges with IPv6 is that it is not easily interchangeable with IPv4; this has slowed adoption, and with the use of proxy, tunneling, etc. technology I believe the sense of urgency is not what it once was. IPv6 adoption has been slow, but with the rapid adoption of IoT and the number of devices being brought online we could begin to see a significant increase in the IPv6 adoption rate. In 2002 Cisco forecasted that IPv6 would be fully adopted by 2007.

Source:  Pingdom. (2009, March 06). A crisis in the making: Only 4% of the Internet supports IPv6.

The Internet Society State of IPv6 Deployment 2017 paper states that ~9 million domains and 23% of networks are advertising IPv6 connectivity. When we look at the adoption of IPv6, I think this table does a nice job outlining where IPv4 and IPv6 sit relative to each other.

Source:  Internet Society. (2017, May 25). State of IPv6 Deployment 2017.

The move to IPv6 will be nearly invisible from a user perspective; our carriers (cable modems, cellular devices, etc.) abstract us from the underpinnings of how things work. Our request to google.com will magically resolve to an IPv6 address vs. an IPv4 address and it won’t matter to the user.

For example, here is a dig of google.com to return its IPv4 and IPv6 addresses.

$ dig google.com A google.com AAAA +short
172.217.3.46
2607:f8b0:4004:80e::200e

Note: If you’re a Linux user you know how to use dig, macOS should have dig, and if you’re on Windows and don’t already know how to get access to dig, the easiest path can be found here: https://www.danesparza.net/2011/05/using-the-dig-dns-tool-on-windows-7/

The adoption rate of IPv6 could be increased by simplifying interoperability between IPv4 and IPv6. The exhaustion of the IPv4 address space and the exponential increase in connected devices are upon us, and this may be the catalyst the industry needs to simplify interoperability and speed adoption.

With the above said, interestingly IPv6 adoption is slowing.

Source:  McCarthy, K. (2018, May 22). IPv6 growth is slowing and no one knows why. Let’s see if El Reg can address what’s going on.

I think it’s a chicken-or-the-egg situation.  There have been IPv4 address space concerns for years; the heavy lift required to adopt IPv6 led to slow and low adoption rates, which pushed innovation in a different direction. With the use of a reverse proxy maybe I don’t need any more public address space, etc. Only time will tell, but this is foundational infrastructure akin to the interstate highway system; change will be a long journey and it’s possible we will start to build new infrastructure before we ever reach the destination.

 

References

Hogg, S. (2015, September 22). ARIN Finally Runs Out of IPv4 Addresses. Retrieved July 20, 2018, from https://www.networkworld.com/article/2985340/ipv6/arin-finally-runs-out-of-ipv4-addresses.html

Internet Society. (2017, May 25). State of IPv6 Deployment 2017. Retrieved July 20, 2018, from https://www.internetsociety.org/resources/doc/2017/state-of-ipv6-deployment-2017/

Kessler, S. (2011, January 22). The Internet Is Running Out of Space…Kind Of. Retrieved July 20, 2018, from https://mashable.com/2011/01/22/the-internet-is-running-out-of-space-kind-of/#49ZaFObrqPqW

McCarthy, K. (2018, May 22). IPv6 growth is slowing and no one knows why. Let’s see if El Reg can address what’s going on. Retrieved July 20, 2018, from https://www.theregister.co.uk/2018/05/21/ipv6_growth_is_slowing_and_no_one_knows_why/

NGINX. (2018, July 20). High Performance Load Balancer, Web Server, & Reverse Proxy. Retrieved July 20, 2018, from https://www.nginx.com/

Pingdom. (2009, March 06). A crisis in the making: Only 4% of the Internet supports IPv6. Retrieved July 20, 2018, from https://royal.pingdom.com/2009/03/06/a-crisis-in-the-making-only-4-of-the-internet-supports-ipv6/

Pingdom. (2017, August 22). Tongue twister: The number of possible IPv6 addresses read out loud. Retrieved July 20, 2018, from https://royal.pingdom.com/2009/05/26/the-number-of-possible-ipv6-addresses-read-out-loud/

Wigmore, I. (2009, January 14). IPv6 addresses – how many is that in numbers? Retrieved July 20, 2018, from https://itknowledgeexchange.techtarget.com/whatis/ipv6-addresses-how-many-is-that-in-numbers/

ZeusDB. (2015, July 30). Understanding IP Addresses – IPv4 vs IPv6. Retrieved July 20, 2018, from https://www.zeusdb.com/blog/understanding-ip-addresses-ipv4-vs-ipv6/

Yacine, NAT certainly has helped ease the IPv4 address space issue, as did other things like proxy ARPing and reverse proxying, all techniques to use less address space (also pretty important for network security).

arping can be a handy little tool to see if you can contact a system and what MAC address it responds with.

> arp-ping.exe -s 0.0.0.0 192.168.30.15
Reply that B8:CA:3A:D1:7E:AB is 192.168.30.15 in 4.604ms
Reply that B8:CA:3A:D1:7E:AB is 192.168.30.15 in 15.745ms
Reply that B8:CA:3A:D1:7E:AB is 192.168.30.15 in 15.642ms
Reply that B8:CA:3A:D1:7E:AB is 192.168.30.15 in 15.623ms

While IPv6 may provide a ton of IP address space, I don’t think the use of NAT and proxies will change; these techniques are as much about security as they are about extending the address space.

James, love the profile pic.  Setting a hard date to kill IPv4 is a stick with no carrot.  The IPv6 shift discussion needs to be driven by the market makers; they should make it compelling enough for enterprises to begin moving faster.  The market makers can make a huge impact: Netflix accounts for > 1/3 of all internet traffic, people are rushing to AWS, Azure, and GCP at alarming rates, and the only procurers of tech that really matter are Amazon, Apple, Facebook, Alphabet, Microsoft, Tencent, and Alibaba.  If the market makers move, everyone else will follow; they will have no choice.  Why aren’t they moving faster?

This is further compounded by the fact that Cisco, Juniper, Arista, and the other mainstream networking equipment providers are not mentioned above.  It’s no secret that Amazon, Facebook, and others are running their own intellectual property to solve lots of legacy networking issues.  Facebook is building and deploying their own switches and load balancers, and AWS wrote their own networking stack because VPC needs could not be handled by traditional networking provider VLANs and overlay networks.  Now we are seeing the adoption of SDN increase, which could speed up IPv6 adoption or could slow it down.

Assignment

[google-drive-embed url=”https://docs.google.com/document/d/1cLkTRQEDEoD6v49Ywu7Jkarc5T4FE-ggc0Mc91KG6H8/preview?usp=drivesdk” title=”Bocchinfuso – FIT – MGT5157 – Week 3 – Assignment 3″ icon=”https://drive-thirdparty.googleusercontent.com/16/type/application/vnd.google-apps.document” width=”100%” height=”400″ style=”embed”]

FIT – MGT5157 – Week 2

The submissions for this assignment are posts in the assignment’s discussion. Below are the discussion posts for Richard Bocchinfuso, or you can view the full discussion.

from 2.3 Discussion

Jul 13, 2018 10:48pm

Richard Bocchinfuso

What is the reason behind packet loss? What can protocols do to limit packet loss? Are there tools available for providers and consumers that identify the source of packet loss?

What is the reason behind packet loss?

“A primary cause of packet loss is the finite size of the buffer involved”  (Wu & Irwin, 2013, p. 15)

Link congestion can cause packet loss. This is where one of the devices in the packet’s path has no room in the buffer to queue the packet, so the packet has to be discarded. Increasing available bandwidth can be a resolution to link congestion; this allows buffers to empty quicker to reduce or eliminate queuing.  The use of QoS to prioritize traffic like voice and video can lower the probability of a dropped packet for traffic that does not tolerate packet loss and retransmission.

Bandwidth constraints: too much data on too small a pipe creates congestion and packet loss.

Congestion is just like a four-lane road merging into a one-lane road. Packet loss can be an intentional thing, where packets are dropped because a rule is in place to drop packets at a certain limit; hosting providers use this to control how customers use available bandwidth. Packet loss can also just occur because of unintentional congestion, where the traffic simply exceeds the available bandwidth.

Device performance can also cause packet loss. This occurs in a situation where you may increase the bandwidth of the route the packet will take, but the device (router, switch, firewall, etc…) is not able to handle the load. In this case, a new device is likely required to support the network load.

For example, a Cisco ASA 5505 is meant to handle 150 Mbps of throughput; if the load exceeds that, the device will likely begin to have issues. Maybe the CPU of the device can’t process the throughput, and then the device experiences congestion and begins dropping packets.

Faulty hardware, software, or misconfiguration can also cause packet loss. Issues can stem from a faulty component like an SFP (small form-factor pluggable) or a cable, from a bug in the device software, or from a configuration issue like a duplex mismatch.

There are plenty of documented examples of software issues which have caused packet loss.

Network attacks like a Denial of Service (DoS) attack can result in packets being dropped because the attack is overwhelming a device with traffic.

What can protocols do to limit packet loss?

TCP (Transmission Control Protocol) is a connection-oriented protocol which is built to detect packet loss and to retransmit data. The protocol itself is built to handle packet loss.

UDP (User Datagram Protocol) is a connectionless protocol that will not detect packet loss and will not retransmit. We see UDP used for streaming content like stock ticker data, video feeds, etc. UDP is often used in conjunction with multicast, where data is transmitted one-to-many or many-to-many. You can probably visualize the use cases here, and how packet loss can impact the user experience. With UDP, data is lost rather than the system experiencing slow or less than optimal response times.
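A small sketch of the difference, using port 9 on the local machine as an arbitrary, assumed-unused port: the UDP send returns happily even with nobody listening, while TCP refuses to move any data without a peer that completes the handshake and acknowledges delivery.

# UDP: fire-and-forget; no handshake, no ACK, no retransmission by the protocol.
import socket

udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"tick", ("127.0.0.1", 9))   # succeeds even if nothing is listening
udp.close()

# TCP: connection-oriented; loss is detected and segments are retransmitted.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    tcp.connect(("127.0.0.1", 9))       # fails fast when no listener answers the handshake
except ConnectionRefusedError:
    print("TCP will not send data without an established, acknowledged connection")
finally:
    tcp.close()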

Layer 4 Transport Optimizations

  • RIP (Routing Information Protocol) and BGP (Border Gateway Protocol) make routing (pathing) decisions based on paths, policies, and rules.
  • TCP Proxy and TFO (Traffic Flow Optimization)
  • Compression
  • DRE (Data Redundancy Elimination):  A technique used to reduce traffic by removing redundant data transmission.  This can be extremely useful for chatty protocols like CIFS (SMB).

Layer 2 and 3 Optimizations

  • OSPF (Open Shortest Path First) and IS-IS (Intermediate System to Intermediate System) use link state routing (LSR) algorithms to determine the best route (path).
  • EIGRP (Enhanced Interior Gateway Routing Protocol) is an advanced distance-vector routing protocol used to automate network routing decisions.
  • Network Segmentation and QoS (Quality of Service): Network congestion is a common cause of packet loss, network segmentation and QoS can ensure that the right traffic is given priority on the network while less critical traffic is dropped.

Are there tools available for providers and consumers that identify the source of packet loss?

There are no hard and fast rules for detecting packet loss on a network, but there are tools and an approach that can be followed.

Some tools I use for diagnosis and troubleshooting:

 

References

Bocchinfuso, R. (2008, January 15). Fs Cisco Event V6 Rjb. Retrieved July 13, 2018, from https://www.slideshare.net/rbocchinfuso/fs-cisco-event-v6-rjb

Hurley, M. (2015, April 28). 4 Causes of Packet Loss and How to Fix Them. Retrieved July 13, 2018, from https://www.annese.com/blog/what-causes-packet-loss

Packet Loss – What is it, How to Diagnose and Fix It in your Network. (2018, May 01). Retrieved July 13, 2018, from https://www.pcwdld.com/packet-loss

Wu, Chwan-Hwa (John). Irwin, J. David. (2013). Introduction to computer networks and cybersecurity. Hoboken: CRC Press.

from 2.3 Discussion

Jul 15, 2018 1:31pm

Richard Bocchinfuso

Andrew, good post.  The only comment I would make is to be careful with using ping as the method to diagnose packet loss; it is a great place to start if the problem is really overt, but often the issues are more complex, and dropped ICMP packets can be expected behavior because they are typically deprioritized by QoS.

I typically recommend the use of paping (https://code.google.com/archive/p/paping/) or hping3 (https://tools.kali.org/information-gathering/hping3) to send a TCP request vs. an ICMP request.

If you are going to use ping I would also suggest increasing the ICMP payload size, assuming the target is not rejecting ICMP requests or dropping them because of a QoS policy.

 

Lastly, there are lots of hops between your computer and the destination, and using MTR is a great way to see where packets are being dropped, where the latency is, etc.

from 2.3 Discussion

Jul 15, 2018 8:50pm

Richard Bocchinfuso

Jonathan, a couple of comments on your post.  While TCP packet loss and dropped packets have the same result, a discarded packet requiring retransmission, packet loss carries an implied context: the discarded packet was unintentional, for the reasons you mention above.  Dropped packets can also be intentional; for example, ICMP (ping) traffic is often deprioritized by QoS, so these packets are intentionally dropped so they do not impact higher priority traffic.

UDP is a connectionless protocol, so there is no ACK from the receiver.  Packets are sent, and if they are lost there is no retransmit because there is no way for the protocol to know the packet was not delivered; with UDP data can be lost or delivered out of order.  There are implementations of UDP (e.g., RUDP) where checks are added to UDP to increase the reliability of the protocol. UDP is often used in conjunction with multicast; if you think about multicast and how TCP and UDP work, it becomes obvious why multicast works with a connectionless protocol like UDP and why TCP can only be used in unicast applications.
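A minimal multicast send sketch (the group address 224.1.1.1 and port 5007 are arbitrary assumptions): one datagram can be picked up by any number of receivers that have joined the group, and the sender gets no acknowledgement from any of them.

# One-to-many UDP multicast: the sender neither knows nor cares how many
# receivers (if any) have joined the group.
import socket

GROUP, PORT = "224.1.1.1", 5007   # assumed multicast group and port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)  # keep it on the local network
sock.sendto(b"market tick", (GROUP, PORT))
sock.close()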

from 2.3 Discussion

Jul 15, 2018 9:35pm

Richard Bocchinfuso

For anyone looking to play with packet sniffing, regardless of the sniffer, it is always good to capture a quality workload, be able to modify your lab environment, and replay the workload to see what happens.  WinDump (tcpdump for Windows) is a great tool to capture traffic to a pcap file, but I would also become familiar with tcpreplay.  You probably want to trade in that Windows box for Linux; my distro of choice for this sort of work is Parrot Security OS.  There is one Windows tool I really like, called NetworkMiner, check it out.  I would also get familiar with GNS3 and the NETem appliance.  So many great tools out there, but GNS3 is a critical tool for learning.  Capturing a quality workload to a pcap, modifying your lab network with GNS3, and using tcpreplay to replay the workload while observing behavior provides a great way to experiment and see the impact. Looking ahead, GNS3 provides a way to apply the routing and subnetting theory that it looks like we’ll be diving into in week three.
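If you would rather stay in Python than shell out to tcpreplay, Scapy can do a rough replay of a capture. A minimal sketch, assuming a capture file named capture.pcap and a lab interface named eth0 (both hypothetical); note that Scapy resends the frames as captured and does not recreate TCP state:

# Read a previously captured workload and replay it onto a lab interface.
from scapy.all import rdpcap, sendp

packets = rdpcap("capture.pcap")              # assumed capture file name
sendp(packets, iface="eth0", verbose=False)   # assumed lab interface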

 

Assignment

[google-drive-embed url=”https://docs.google.com/document/d/1hv4OtK-lxTN-HrsT3sLci5zfuTTFaD6cdFTYL_2acW8/preview?usp=drivesdk” title=”Bocchinfuso – FIT – MGT5157 – Week 2 – Assignment 2″ icon=”https://drive-thirdparty.googleusercontent.com/16/type/application/vnd.google-apps.document” width=”100%” height=”400″ style=”embed”]

FIT – MGT5157 – Week 1

The submissions for this assignment are posts in the assignment’s discussion. Below are the discussion posts for Richard Bocchinfuso, or you can view the full discussion.

What is the internet2? What implications does it hold to the current internet infrastructure?

Super interesting question, because while I may have heard of Internet2 years ago, I can’t say I ever really knew what it was. I also think it’s interesting given Tim Berners-Lee’s recent comments on his regrets about what he was so pivotal in creating, the World Wide Web.

As I read about Internet2, I can’t help but think about how it parallels ARPANET and NSFNET. Rather than trying to create a network and pass the first packets like ARPANET, the Internet2 consortium has the goal of innovating new Internet technologies to meet the demands of the modern connected world.

Leonard Kleinrock does a fabulous job explaining what was the first router, a packet switch built by BBN (Bolt Beranek and Newman), and the first message sent across the Internet (the ARPANET then) between UCLA and SRI (Stanford Research Institute). (Kleinrock, 2009)

I also highly recommend a documentary called “Lo and Behold, Reveries of the Connected World” (It is on Netflix).

If you have spare time and want to dig deeper, Charles Severance has a great Coursera class called “Internet History, Technology, and Security” which I also recommend.

Internet2 is both a research and development initiative and a tangible domestic U.S. nationwide carrier-class hybrid optical and packet network with the goal of supporting research facilities in their development of advanced Internet applications. (Wu & Irwin, 2013, p. 10)

Funny how similar the maps below look; the parallel between the Internet2 map and the NSFNET map is not a coincidence. The infrastructure required to build these networks is owned by few providers, and these organizations invest heavily in lobbyists to block new entrants. It’s a game that undoubtedly slows innovation. Just read about the challenges Google Fiber had trying to lay fiber. (Brodkin, 2017)

The Internet2 backbone.

Source:  Wu, Chwan-Hwa (John); Irwin, J. David. Introduction to Computer Networks and Cybersecurity (Page 11). CRC Press. Kindle Edition.

Source:  Wikipedia. (2018, July 03). National Science Foundation Network. Retrieved July 6, 2018, from https://en.wikipedia.org/wiki/National_Science_Foundation_Network

 

Regarding what implications Internet2 holds for the current internet infrastructure: Internet2 seems to be focused on research and education, not all that different from the objectives of ARPANET, CSNET, and NSFNET.  Internet2 is aiming to solve the problems of the modern Internet, focused on innovating to enable research and education; this includes innovations that aim to increase bandwidth, remove bottlenecks, and enable software-defined networking.

The one thing that concerned me is that in my research I did not see the role of commercial partners like Netflix and Google.  This concerns me because we live in a time where these two providers alone are responsible for > 50% of Internet traffic.  This means that massive backbone providers like Level 3 and Cogent are carrying a ton of Netflix and Google (more specifically YouTube) traffic.  Unlike the days of ARPANET, commercial entities have a massive role in the evolution and innovation of the Internet.  While CERN is mentioned, I think we would be remiss in not realizing that there is a migration of data, even in research and education, to the cloud, which means that Amazon becomes the carrier’s customer, not the research or education institution.

Internet goliaths like Google, Facebook, Netflix, and Amazon are struggling to buy off-the-shelf infrastructure to support their massive needs.  All of these providers are building infrastructure and in many cases open sourcing the how-to documentation.  There is no doubt that we live in interesting technological times.

For example, here is what NASA JPL (Jet Propulsion Laboratory) did with AWS:

With all that said, the implications of Internet2 for the current Internet are not much that I can see.  It would seem to me that Internet2 will need to focus on a niche to even remain relevant.

One final thought. Did the Internet2 consortium have something to do with us moving off that prehistoric LMS we were using to Canvas? If so, keep up the great work.  The ability to create rich media posts, how revolutionary.  ¯\_(ツ)_/¯

References

Brodkin, J. (2017, November 24). AT&T and Comcast lawsuit has nullified a city’s broadband competition law. Retrieved July 6, 2018, from https://arstechnica.com/tech-policy/2017/11/att-and-comcast-win-lawsuit-they-filed-to-stall-google-fiber-in-nashville/

Brooker, K. (2018, July 02). “I Was Devastated”: The Man Who Created the World Wide Web Has Some Regrets. Retrieved July 6, 2018, from https://www.vanityfair.com/news/2018/07/the-man-who-created-the-world-wide-web-has-some-regrets

Kleinrock, L. (2009, January 13). The first Internet connection, with UCLA’s Leonard Kleinrock. Retrieved July 6, 2018, from https://youtu.be/vuiBTJZfeo8

Techopedia. (2018, July 6). What is Internet2? – Definition from Techopedia. Retrieved July 6, 2018, from https://www.techopedia.com/definition/24955/internet2

Wikipedia. (2018, July 03). National Science Foundation Network. Retrieved July 6, 2018, from https://en.wikipedia.org/wiki/National_Science_Foundation_Network

Wu, Chwan-Hwa (John). Irwin, J. David. (2013). Introduction to computer networks and cybersecurity. Hoboken: CRC Press.

Hailey, good post, I enjoyed reading it.  I have to say I wonder how relevant a private research and education network can be in today’s age.  The project seems way underfunded to me given the dollars being put into Internet capacity by huge players in the space.  The other thing that makes me wonder if Internet2 is viable is the fact that it is a domestic network living in an increasingly flat world.  Will research and education institutions using Internet2 connectivity be able to ride the network to Microsoft’s submersible data center?

Just don’t know about Internet2.  The information and mission feel a little dated.  100 Gigabit connectivity is everywhere today; these speeds are no longer just for carrier interconnects, they are everywhere in the modern data center.

The private sector is moving pretty fast and they have to innovate for competitive advantage; the amount of cash being dumped into moonshot ideas in the private sector is unprecedented, which I think creates an even bigger problem for the long-term viability of Internet2.

James, good post and you make some very good points.  Five years ago most enterprises leveraged private MPLS (Multiprotocol Label Switching) networks to build their WAN (Wide Area Network) for things like intranet communication, unified communications, etc…  This reminds me of the Internet2 value proposition.

Source:  Maupin, 2016

Fast forward to today and MPLS is being supplanted at an alarming rate by technologies like SD-WAN (Software Defined WAN).  Proponents of MPLS argue that once your packets hit the public Internet, you will not be able to guarantee low levels of packet loss, latency, and jitter.  Sound familiar to any of the research on this topic?

OK, this might be somewhat true: you can’t guarantee QoS (Quality of Service) on the internet.  But now let’s pause for a minute and think about the context of how the market is shifting; cloud-based computing has had a major impact on the industry.  Cloud-based communications companies like 8×8, where the CEO happens to be a Florida Institute of Technology graduate, have challenged these notions and pushed technologies like SD-WAN to address the issues of packet loss, latency, and jitter that make public Internet circuits a problem in certain use cases.

I always ask myself, would Arthur Rock put his money here?  Based on what I know about Internet2, at this point, I would say probably not.

References

Maupin, R. (2016, May 24). Have I designed correctly my MPLS network? Retrieved July 6, 2018, from https://networkengineering.stackexchange.com/questions/30673/have-i-desiged-correctly-my-mpls-network

Assignment

[google-drive-embed url=”https://docs.google.com/document/d/1YgwjXzOziekTZR4vqeZM0S-lQYfuORMntZfKTSFzFQI/preview?usp=drivesdk” title=”Bocchinfuso – FIT – MGT5157 – Week 1 – Assignment 1″ icon=”https://drive-thirdparty.googleusercontent.com/16/type/application/vnd.google-apps.document” width=”100%” height=”400″ style=”embed”]