Richard J. Bocchinfuso

"Be yourself; everyone else is already taken." – Oscar Wilde

FIT – MGT5157 – Week 3

The submissions for this assignment are posts in the assignment’s discussion. Below are the discussion posts for Richard Bocchinfuso, or you can view the full discussion.

Discussion: Describe the differences between IPv6 and IPv4. What implications does it have on networks? On the user? What could be done to speed up the transition process?

First, let’s talk about a major catalyst for the development and adoption of IPv6: the idea that the internet would exhaust the available IP address space. This prediction was made back in 2011, and it was stated that the Internet would exhaust all available IPv4 addresses by 4 AM on February 2, 2011 (Kessler, 2011). Here we are 2,725 days later and the “IPcalypse” or “ARPAgeddon” has yet to happen; in fact, @IPv4Countdown is still foreshadowing IPv4 doomsday scenarios via Twitter. So what is the deal? Well, it’s true the available IPv4 address space is limited, with a pool of slightly less than 4.3 billion addresses (2^32, more on this later). It is important to remember that many of these predictions predate Al Gore taking credit for creating the internet. Sorry Bob Kahn and Vint Cerf, it was Al Gore who made this happen.

Back in the 1990s we didn’t have visibility into technologies like CIDR (Classless Inter-Domain Routing) and NAT (Network Address Translation). In addition, many of us today use techniques like reverse proxying and proxy ARPing. Simplistically, this allows something like NGINX to act as a proxy (middleman), where multiple services can be placed behind a single port on a single public IP address and traffic can be appropriately routed and proxied.

For example, a snippet of an NGINX reverse proxy config might look something like this:

server {
    listen 80;
    server_name site.foo.com;
    location / {
        access_log on;
        client_max_body_size 500M;
        proxy_pass http://INTERNAL_HOSTNAME_OR_IP;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

server {
    listen 80;
    server_name site.bar.com;
    location / {
        access_log on;
        client_max_body_size 500M;
        proxy_pass http://INTERNAL_HOSTNAME_OR_IP;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

Let’s assume that there are two DNS A records, one for site.foo.com and one for site.bar.com, and that both point to the same public IP address, with the proxy listening on port 80. How does a request for site.foo.com know to go to web server A and a request for site.bar.com to go to web server B? The answer is a reverse proxy, which proxies the request based on the requested server name; this is what we see above.

I use this configuration for two sites which I host, bocchinfuso.net and gotitsolutions.org.

A dig (domain information groper) of both of these domains reveals that their A records point to the same IP address; the NGINX reverse proxy does the work of routing to the proper server or service based on the requested server name and proxies the traffic back to the client. nslookup would work as well if you would like to try, but dig has a cleaner display for posting below.

$ dig bocchinfuso.net A +short
173.63.111.136
$ dig gotitsolutions.org A +short
173.63.111.136

NGINX is a popular web server which can also be used for reverse proxying, as I am using it above, as well as load balancing.

IPv6 (Internet Protocol version 6) is the next generation of, and successor to, IPv4 (Internet Protocol version 4). An IPv4 address is a numerical address made up of four octets, each 8 bits, forming a 32-bit address. Written in dotted-decimal notation, an IPv4 address is four numbers between 0 and 255.

Source:  ZeusDB. (2015, July 30). Understanding IP Addresses – IPv4 vs IPv6.

IPv6 addresses consist of eight 16-bit segments, forming a 128-bit address, giving IPv6 a total address space of 2^128 (~340.3 undecillion addresses), which is a pretty big address space. To put 2^128 into perspective, it is enough address space for every person on the planet to personally have roughly 2^95, or about 39.6 octillion, IP addresses. That’s a lot of IP address space.

Source:  ZeusDB. (2015, July 30). Understanding IP Addresses – IPv4 vs IPv6.
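If you want to sanity-check these numbers yourself, any arbitrary-precision calculator will do; here is a quick illustration using bc (the trailing comments are just annotations):

$ echo '2^32' | bc     # IPv4 address space
4294967296
$ echo '2^128' | bc    # IPv6 address space
340282366920938463463374607431768211456
$ echo '2^95' | bc     # roughly the per-person share referenced above
39614081257132168796771975168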

One of the challenges with IPv6 is that it is not easily interchangeable with IPv4. This has slowed adoption, and with the use of proxying, tunneling, and similar technologies, I believe the sense of urgency is not what it once was. IPv6 adoption has been slow, but with the rapid adoption of IoT and the number of devices being brought online we could begin to see a significant increase in the IPv6 adoption rate. In 2002 Cisco forecasted that IPv6 would be fully adopted by 2007.

Source:  Pingdom. (2009, March 06). A crisis in the making: Only 4% of the Internet supports IPv6.

The Internet Society State of IPv6 Deployment 2017 paper states that ~9 million domains and 23% of networks are advertising IPv6 connectivity. When we look at the adoption of IPv6, I think this table does a nice job outlining where IPv4 and IPv6 sit relative to each other.

Source:  Internet Society. (2017, May 25). State of IPv6 Deployment 2017.

The move to IPv6 will be nearly invisible from a user perspective; our carriers (cable modems, cellular devices, etc.) abstract us from the underpinnings of how things work. Our request for google.com will magically resolve to an IPv6 address instead of an IPv4 address, and it won’t matter to the user.

For example, here is a dig of google.com that returns google.com’s IPv4 (A) and IPv6 (AAAA) addresses.

$ dig google.com A google.com AAAA +short
172.217.3.46
2607:f8b0:4004:80e::200e

Note: If you’re a Linux user you know how to use dig, macOS should have dig, and if you’re on Windows and don’t already know how to get access to dig, the easier path can be found here: https://www.danesparza.net/2011/05/using-the-dig-dns-tool-on-windows-7/

The adoption rate of IPv6 could be increased by simplifying interoperability between IPv4 and IPv6. The exhaustion of the IPv4 address space and the exponential increase in connected devices are upon us, and this may be the catalyst the industry needs to simplify interoperability and speed adoption.

With the above said, interestingly IPv6 adoption is slowing.

Source:  McCarthy, K. (2018, May 22). IPv6 growth is slowing and no one knows why. Let’s see if El Reg can address what’s going on.

I think it’s a chicken-or-egg situation. There have been IPv4 address space concerns for years, but the heavy lift required to adopt IPv6 led to slow and low adoption rates, which pushed innovation in a different direction. With the use of a reverse proxy, maybe I don’t need any more public address space, etc. Only time will tell, but this is foundational infrastructure akin to the interstate highway system; change will be a long journey, and it’s possible we will start to build new infrastructure before we ever reach the destination.

 

References

Hogg, S. (2015, September 22). ARIN Finally Runs Out of IPv4 Addresses. Retrieved July 20, 2018, from https://www.networkworld.com/article/2985340/ipv6/arin-finally-runs-out-of-ipv4-addresses.html

Internet Society. (2017, May 25). State of IPv6 Deployment 2017. Retrieved July 20, 2018, from https://www.internetsociety.org/resources/doc/2017/state-of-ipv6-deployment-2017/

Kessler, S. (2011, January 22). The Internet Is Running Out of Space…Kind Of. Retrieved July 20, 2018, from https://mashable.com/2011/01/22/the-internet-is-running-out-of-space-kind-of/#49ZaFObrqPqW

McCarthy, K. (2018, May 22). IPv6 growth is slowing and no one knows why. Let’s see if El Reg can address what’s going on. Retrieved July 20, 2018, from https://www.theregister.co.uk/2018/05/21/ipv6_growth_is_slowing_and_no_one_knows_why/

NGINX. (2018, July 20). High Performance Load Balancer, Web Server, & Reverse Proxy. Retrieved July 20, 2018, from https://www.nginx.com/

Pingdom. (2009, March 06). A crisis in the making: Only 4% of the Internet supports IPv6. Retrieved July 20, 2018, from https://royal.pingdom.com/2009/03/06/a-crisis-in-the-making-only-4-of-the-internet-supports-ipv6/

Pingdom. (2017, August 22). Tongue twister: The number of possible IPv6 addresses read out loud. Retrieved July 20, 2018, from https://royal.pingdom.com/2009/05/26/the-number-of-possible-ipv6-addresses-read-out-loud/

Wigmore, I. (2009, January 14). IPv6 addresses – how many is that in numbers? Retrieved July 20, 2018, from https://itknowledgeexchange.techtarget.com/whatis/ipv6-addresses-how-many-is-that-in-numbers/

ZeusDB. (2015, July 30). Understanding IP Addresses – IPv4 vs IPv6. Retrieved July 20, 2018, from https://www.zeusdb.com/blog/understanding-ip-addresses-ipv4-vs-ipv6/

Yacine, NAT certainly has helped ease the IPv4 address space issue, as did other things like proxy ARPing and reverse proxying, all techniques to use less address space (also pretty important for network security).

arping can be a handy little tool to see if you can contact a system and what MAC address it is answering ARP from.

> arp-ping.exe -s 0.0.0.0 192.168.30.15
Reply that B8:CA:3A:D1:7E:AB is 192.168.30.15 in 4.604ms
Reply that B8:CA:3A:D1:7E:AB is 192.168.30.15 in 15.745ms
Reply that B8:CA:3A:D1:7E:AB is 192.168.30.15 in 15.642ms
Reply that B8:CA:3A:D1:7E:AB is 192.168.30.15 in 15.623ms

While IPv6 may provide a ton of IP address space, I don’t think the use of NAT and proxies will change; these techniques are as much about security as they are about extending the address space.

James, love the profile pic. Setting a hard date to kill IPv4 is a stick, not a carrot. The IPv6 shift discussion needs to be driven by the market makers; they should make it compelling enough for enterprises to begin moving faster. The market makers can make a huge impact: Netflix accounts for more than a third of all internet traffic, people are rushing to AWS, Azure, and GCP at alarming rates, and the only procurers of tech that really matter are Amazon, Apple, Facebook, Alphabet, Microsoft, Tencent, and Alibaba. If the market makers move, everyone else will follow; they will have no choice. Why aren’t they moving faster?

This is further compounded by the fact that Cisco, Juniper, Arista, and the other mainstream networking equipment providers are not mentioned above. It’s no secret that Amazon, Facebook, and others are running their own intellectual property to solve lots of legacy networking issues. Facebook is building and deploying its own switches and load balancers, and AWS wrote its own networking stack because VPC needs could not be handled by traditional networking providers’ VLANs and overlay networks. Now we are seeing the adoption of SDN increase, which could speed up IPv6 adoption or could slow it down.

Assignment

[google-drive-embed url=”https://docs.google.com/document/d/1cLkTRQEDEoD6v49Ywu7Jkarc5T4FE-ggc0Mc91KG6H8/preview?usp=drivesdk” title=”Bocchinfuso – FIT – MGT5157 – Week 3 – Assignment 3″ icon=”https://drive-thirdparty.googleusercontent.com/16/type/application/vnd.google-apps.document” width=”100%” height=”400″ style=”embed”]

FIT – MGT5157 – Week 2

The submissions for this assignment are posts in the assignment’s discussion. Below are the discussion posts for Richard Bocchinfuso, or you can view the full discussion.

from 2.3 Discussion

Jul 13, 2018 10:48pm

Richard Bocchinfuso

What is the reason behind packet loss? What can protocols do to limit packet loss? Are there tools available for providers and consumers that identify the source of packet loss?

What is the reason behind packet loss?

“A primary cause of packet loss is the finite size of the buffer involved” (Wu & Irwin, 2013, p. 15).

Link congestion can cause packet loss. This is where one of the devices in the packet’s path has no room in its buffer to queue the packet, so the packet has to be discarded. Increasing available bandwidth can be a resolution to link congestion; it allows buffers to empty more quickly, reducing or eliminating queuing. The use of QoS to prioritize traffic like voice and video can lower the probability of a dropped packet for traffic that does not tolerate packet loss and retransmission.
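As a rough illustration of prioritization (not a production config; the interface name and the DSCP EF marking for voice are assumptions), Linux tc can push EF-marked traffic into the highest-priority band of a prio qdisc:

$ tc qdisc add dev eth0 root handle 1: prio                      # three priority bands
$ tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip tos 0xb8 0xfc flowid 1:1   # DSCP EF (voice) goes to the top band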

Bandwidth constraints: too much data on too small a pipe creates congestion and packet loss.

Congestion is like a four-lane road merging into a one-lane road. Packet loss can be intentional, where packets are dropped because a rule is in place to drop packets above a certain limit; hosting providers use this to control how customers use available bandwidth. Packet loss can also occur because of unintentional congestion, where the traffic simply exceeds the available bandwidth.
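A sketch of the intentional case, assuming a Linux box shaping a customer link to 100 Mbps with tbf (token bucket filter); traffic beyond the rate queues and is then dropped:

$ tc qdisc add dev eth0 root tbf rate 100mbit burst 128kbit latency 50ms   # excess traffic queues, then drops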

Device performance can also cause packet loss. This occurs in a situation where you may increase the bandwidth of the route the packet will take, but the device (router, switch, firewall, etc…) is not able to handle the load. In this case, a new device is likely required to support the network load.

For example, a Cisco ASA 5505 is rated to handle 150 Mbps of throughput; if the device is pushed beyond that, it will likely begin to have issues. Perhaps the CPU of the device can’t process the throughput, and the device experiences congestion and begins dropping packets.

Faulty hardware, software, or misconfiguration. A faulty component like an SFP (small form-factor pluggable) or a cable, a bug in the device software, or a configuration issue like a duplex mismatch can all cause packet loss.

There are plenty of documented examples of software bugs in network devices that have caused packet loss in the field.

Network attacks like a Denial of Service (DoS) attack can result in packets being dropped because the attack overwhelms a device with traffic.

What can protocols do to limit packet loss?

TCP (Transmission Control Protocol) is a connection-oriented protocol which is built to detect packet loss and to retransmit data. The protocol itself is built to handle packet loss.
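One way to watch TCP doing this work, assuming tshark (the Wireshark CLI) is installed and eth0 is the interface of interest, is to filter for retransmissions:

$ tshark -i eth0 -Y tcp.analysis.retransmission   # show only packets Wireshark flags as TCP retransmissions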

UDP (User Datagram Protocol) is a connectionless protocol that will not detect packet loss and will not retransmit. We see UDP used for streaming content like stock ticker data, video feeds, etc. UDP is often used in conjunction with multicast, where data is transmitted one-to-many or many-to-many. You can probably visualize the use cases here and how packet loss can impact the user experience. With UDP, data is lost rather than the system experiencing slow or less-than-optimal response times.
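A quick way to see the fire-and-forget nature of UDP is netcat (flags vary slightly between netcat variants; the port and address here are arbitrary):

$ nc -u -l 9999                               # terminal 1: listen for UDP datagrams on port 9999
$ echo "tick" | nc -u -w 1 127.0.0.1 9999     # terminal 2: send a datagram; nothing acknowledges receipt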

Layer 4 Transport Optimizations

  • RIP (Routing Information Protocol) and BGP (Border Gateway Protocol) make routing (pathing) decisions based on paths, policies, and rules.
  • TCP Proxy and TFO (Traffic Flow Optimization)
  • Compression
  • DRE (Data Redundancy Elimination):  A technique used to reduce traffic by removing redundant data transmission.  This can be extremely useful for chatty protocols like CIFS (SMB).

Layer 2 and 3 Optimizations

  • OSPF (Open Shortest Path First) and IS-IS (Intermediate System to Intermediate System) use link state routing (LSR) algorithms to determine the best route (path).
  • EIGRP (Enhanced Interior Gateway Routing Protocol) is an advanced distance-vector routing protocol used to automate network routing decisions.
  • Network Segmentation and QoS (Quality of Service): Network congestion is a common cause of packet loss; network segmentation and QoS can ensure that the right traffic is given priority on the network while less critical traffic is dropped.

Are there tools available for providers and consumers that identify the source of packet loss?

There are no hard and fast rules for detecting packet loss on a network but there are tools and an approach that can be followed.

Some tools I use for diagnosis and troubleshooting:
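For example (hostnames are placeholders, and this is illustrative rather than exhaustive), mtr shows per-hop loss and latency, and iperf3 can measure loss on a UDP test between two endpoints you control:

$ mtr -rwc 100 example.com          # report mode, 100 cycles, per-hop loss and latency
$ iperf3 -s                         # on the far end
$ iperf3 -c example.com -u -b 50M   # UDP test; the summary reports lost/total datagrams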

 

References

Bocchinfuso, R. (2008, January 15). Fs Cisco Event V6 Rjb. Retrieved July 13, 2018, from https://www.slideshare.net/rbocchinfuso/fs-cisco-event-v6-rjb

Hurley, M. (2015, April 28). 4 Causes of Packet Loss and How to Fix Them. Retrieved July 13, 2018, from https://www.annese.com/blog/what-causes-packet-loss

Packet Loss – What is it, How to Diagnose and Fix It in your Network. (2018, May 01). Retrieved July 13, 2018, from https://www.pcwdld.com/packet-loss

Wu, Chwan-Hwa (John). Irwin, J. David. (2013). Introduction to computer networks and cybersecurity. Hoboken: CRC Press.

from 2.3 Discussion

Jul 15, 2018 1:31pm

Richard Bocchinfuso

Andrew, good post. The only comment I would make is to be careful using ping as the method to diagnose packet loss. It is a great place to start if the problem is really overt, but often the issues are more complex, and dropped ICMP packets can be expected behavior because they are typically deprioritized by QoS.

I typically recommend the use of paping (https://code.google.com/archive/p/paping/) or hping3 (https://tools.kali.org/information-gathering/hping3) to send a TCP rather than an ICMP request.

If you are going to use ping, I would also suggest increasing the ICMP payload size, assuming the target is not rejecting ICMP requests or dropping them because of a QoS policy.
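For example (the target hostname is a placeholder and exact flags vary by platform), a larger ICMP payload and a TCP SYN probe might look like this:

$ ping -c 20 -s 1400 example.com        # 20 ICMP echoes with a 1400-byte payload
$ hping3 -S -p 443 -c 10 example.com    # 10 TCP SYN probes to port 443, sidestepping ICMP deprioritization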

 

Lastly, there are lots of hops between your computer and the destination, and using MTR is a great way to see where packets are being dropped, where the latency is, etc.

from 2.3 Discussion

Jul 15, 2018 8:50pm

Richard Bocchinfuso

Jonathan, a couple of comments on your post. While TCP packet loss and dropped packets have the same result, a discarded packet requiring retransmission, packet loss has an implied context that the discarded packet was unintentional, for the reasons you mention above. Dropped packets can also be intentional; for example, ICMP (ping) traffic is often deprioritized by QoS, so these packets are intentionally dropped so they do not impact higher priority traffic.

UDP is a connectionless protocol, so there is no ACK from the receiver. Packets are sent, and if they are lost there is no retransmit because there is no way for the protocol to know the packet was not delivered; with UDP, data can be lost or delivered out of order. There are implementations of UDP (e.g., RUDP) where checks can be added to increase the reliability of the protocol. UDP is often used in conjunction with multicast; if you think about multicast and how TCP and UDP work, it becomes obvious why multicast works with a connectionless protocol like UDP and why TCP can only be used in unicast applications.

from 2.3 Discussion

Jul 15, 2018 9:35pm

Richard Bocchinfuso

For anyone looking to play with packet sniffing, regardless of the sniffer it is always good to capture a quality workload, be able to modify your lab environment, and replay the workload to see what happens. Windump (tcpdump for Windows) is a great tool for capturing traffic to a pcap file, but I would also become familiar with tcpreplay. You probably want to trade in that Windows box for Linux; my distro of choice for this sort of work is Parrot Security OS. There is one Windows tool I really like, called NetworkMiner, check it out. I would also get familiar with GNS3 and the NETem appliance. There are so many great tools out there, but GNS3 is a critical tool for learning. Capturing a quality workload to a pcap, modifying your lab network with GNS3, and using tcpreplay to replay the workload while observing behavior provides a great way to experiment and see the impact. Looking ahead, GNS3 provides a way to apply the routing and subnetting theory that it looks like we’ll be diving into in week three.
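A minimal capture-and-replay sketch, assuming eth0 faces the production traffic, eth1 faces the GNS3 lab, and the port 80 filter is just an example workload:

$ tcpdump -i eth0 -w workload.pcap port 80         # capture a sample workload to a pcap
$ tcpreplay --intf1=eth1 --mbps=10 workload.pcap   # replay it into the lab at a fixed rate while observing behavior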

 

Assignment

[google-drive-embed url=”https://docs.google.com/document/d/1hv4OtK-lxTN-HrsT3sLci5zfuTTFaD6cdFTYL_2acW8/preview?usp=drivesdk” title=”Bocchinfuso – FIT – MGT5157 – Week 2 – Assignment 2″ icon=”https://drive-thirdparty.googleusercontent.com/16/type/application/vnd.google-apps.document” width=”100%” height=”400″ style=”embed”]

FIT – MGT5157 – Week 1

The submissions for this assignment are posts in the assignment’s discussion. Below are the discussion posts for Richard Bocchinfuso, or you can view the full discussion.

What is the internet2? What implications does it hold to the current internet infrastructure?

Super interesting question, because while I may have heard of Internet2 years ago, I can’t say I ever really knew what it was. I also think it’s interesting given Tim Berners-Lee’s recent comments on his regrets about what he was so pivotal in creating, the World Wide Web.

As I read about Internet2, I can’t help but think about how it parallels ARPANET and NSFNET. Rather than trying to create a network and pass the first packets like ARPANET, the Internet2 consortium has the goal of innovating new Internet technologies to meet the demands of the modern connected world.

Leonard Kleinrock does a fabulous job explaining the first router, a packet switch built by BBN (Bolt, Beranek and Newman), and the first message sent across the Internet (then the ARPANET) between UCLA and SRI (Stanford Research Institute). (Kleinrock, 2009)

I also highly recommend a documentary called “Lo and Behold, Reveries of the Connected World” (It is on Netflix).

If you have spare time and want to dig deeper, Charles Severance has a great Coursera class called “Internet History, Technology, and Security” which I also recommend.

Internet2 is both a research and development initiative and a tangible, domestic U.S., nationwide, carrier-class hybrid optical and packet network with the goal of supporting research facilities in their development of advanced Internet applications. (Wu & Irwin, 2013, p. 10)

Funny how similar the maps below look; the parallel between the Internet2 map and the NSFNET map is not a coincidence. The infrastructure required to build these networks is owned by few providers, and these organizations invest heavily in lobbyists to block new entrants. It’s a game that undoubtedly slows innovation. Just read about the challenges Google Fiber had trying to lay fiber. (Brodkin, 2017)

The Internet2 backbone.

Source:  Wu, Chwan-Hwa (John); Irwin, J. David. Introduction to Computer Networks and Cybersecurity (Page 11). CRC Press. Kindle Edition.

Source:  Wikipedia. (2018, July 03). National Science Foundation Network. Retrieved July 6, 2018, from https://en.wikipedia.org/wiki/National_Science_Foundation_Network

 

Regarding what implications Internet2 holds for the current internet infrastructure: Internet2 seems to be focused on research and education, not all that different from the objectives of ARPANET, CSNET, and NSFNET. Internet2 is aiming to solve the problems of the modern Internet, focused on innovating to enable research and education; this includes innovations that aim to increase bandwidth, remove bottlenecks, and enable software-defined networking.

The one thing that concerned me is that in my research I did not see a role for commercial partners like Netflix and Google. This concerns me because we live in a time where these two providers alone are responsible for more than 50% of Internet traffic, which means that massive backbone providers like Level 3 and Cogent are carrying a ton of Netflix and Google (more specifically YouTube) traffic. Unlike the days of ARPANET, commercial entities have a massive role in the evolution and innovation of the Internet. While CERN is mentioned, I think we would be remiss in not realizing that there is a migration of data, even in research and education, to the cloud, which means that Amazon becomes the carrier’s customer, not the research or education institution.

Internet goliaths like Google, Facebook, Netflix, and Amazon are struggling to buy off-the-shelf infrastructure to support their massive needs.  All of these providers are building infrastructure and in many cases open sourcing the how-to documentation.  There is no doubt that we live in interesting technological times.

For example, here is what NASA JPL (Jet Propulsion Laboratory) did with AWS:

With all that said, the implications of Internet2 for the current Internet: not much that I can see. It would seem to me that Internet2 will need to focus on a niche to even remain relevant.

One final thought: did the Internet2 consortium have something to do with us moving off that prehistoric LMS we were using to Canvas? If so, keep up the great work. The ability to create rich media posts, how revolutionary. ¯\_(ツ)_/¯

References

Brodkin, J. (2017, November 24). AT&T and Comcast lawsuit has nullified a city’s broadband competition law. Retrieved July 6, 2018, from https://arstechnica.com/tech-policy/2017/11/att-and-comcast-win-lawsuit-they-filed-to-stall-google-fiber-in-nashville/

Brooker, K. (2018, July 02). “I Was Devastated”: The Man Who Created the World Wide Web Has Some Regrets. Retrieved July 6, 2018, from https://www.vanityfair.com/news/2018/07/the-man-who-created-the-world-wide-web-has-some-regrets

Kleinrock, L. (2009, January 13). The first Internet connection, with UCLA’s Leonard Kleinrock. Retrieved July 6, 2018, from https://youtu.be/vuiBTJZfeo8

Techopedia. (2018, July 6). What is Internet2? – Definition from Techopedia. Retrieved July 6, 2018, from https://www.techopedia.com/definition/24955/internet2

Wikipedia. (2018, July 03). National Science Foundation Network. Retrieved July 6, 2018, from https://en.wikipedia.org/wiki/National_Science_Foundation_Network

Wu, Chwan-Hwa (John). Irwin, J. David. (2013). Introduction to computer networks and cybersecurity. Hoboken: CRC Press.

Hailey, good post, I enjoyed reading it. I have to say I wonder how relevant a private research and education network can be in today’s age. The project seems way underfunded to me given the dollars being put into Internet capacity by huge players in the space. The other thing that makes me wonder if Internet2 is viable is the fact that it is a domestic network living in an increasingly flat world. Will research and education institutions using Internet2 connectivity be able to ride the network to Microsoft’s submersible data center?

I just don’t know about Internet2. The information and mission feel a little dated. 100 Gigabit connectivity is everywhere today; these speeds are no longer just for carrier interconnects, they are everywhere in the modern data center.

The private sector is moving pretty fast, and they have to innovate for competitive advantage; the amount of cash being dumped into moonshot ideas in the private sector is unprecedented, which I think creates an even bigger problem for the long-term viability of Internet2.

James, good post, and you make some very good points. Five years ago most enterprises leveraged private MPLS (Multiprotocol Label Switching) networks to build their WAN (Wide Area Network) for things like intranet communication, unified communications, etc. This reminds me of the Internet2 value proposition.

Source:  Maupin, 2016

Fast forward to today, and MPLS is being supplanted at an alarming rate by technologies like SD-WAN (Software-Defined WAN). Proponents of MPLS argue that once your packets hit the public Internet, you will not be able to guarantee low levels of packet loss, latency, and jitter. Sound familiar to any of the research on this topic?

OK, this might be somewhat true: you can’t guarantee QoS (Quality of Service) on the internet. But now let’s pause for a minute and think about the context of how the market is shifting; cloud-based computing has had a major impact on the industry. Cloud-based communications companies like 8×8, whose CEO happens to be a Florida Institute of Technology graduate, have challenged these notions and pushed technologies like SD-WAN to address the issues of packet loss, latency, and jitter that make public Internet circuits a problem in certain use cases.

I always ask myself, would Arthur Rock put his money here? Based on what I know about Internet2, at this point, I would say probably not.

References

Maupin, R. (2016, May 24). Have I designed correctly my MPLS network? Retrieved July 6, 2018, from https://networkengineering.stackexchange.com/questions/30673/have-i-desiged-correctly-my-mpls-network

Assignment

[google-drive-embed url=”https://docs.google.com/document/d/1YgwjXzOziekTZR4vqeZM0S-lQYfuORMntZfKTSFzFQI/preview?usp=drivesdk” title=”Bocchinfuso – FIT – MGT5157 – Week 1 – Assignment 1″ icon=”https://drive-thirdparty.googleusercontent.com/16/type/application/vnd.google-apps.document” width=”100%” height=”400″ style=”embed”]