It was a cold spring night and I was bored, so I decided to enter the future of the internet: IPv6.
How?
First, I set up IPv6 router advertisements on my main interface (switch0 in my case):
set interfaces switch switch0 ipv6 dup-addr-detect-transmits 1
set interfaces switch switch0 ipv6 router-advert cur-hop-limit 64
set interfaces switch switch0 ipv6 router-advert managed-flag false
set interfaces switch switch0 ipv6 router-advert max-interval 30
set interfaces switch switch0 ipv6 router-advert other-config-flag false
set interfaces switch switch0 ipv6 router-advert prefix '::/64' autonomous-flag true
set interfaces switch switch0 ipv6 router-advert prefix '::/64' on-link-flag true
set interfaces switch switch0 ipv6 router-advert prefix '::/64' valid-lifetime 600
set interfaces switch switch0 ipv6 router-advert reachable-time 0
set interfaces switch switch0 ipv6 router-advert retrans-timer 0
set interfaces switch switch0 ipv6 router-advert send-advert true
This gave every client on my network its own IPv6 address via SLAAC.
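Once the real prefix is in place (the script below fills it in), a quick way to confirm clients are actually picking up SLAAC addresses is to check from a Linux machine on the LAN. This is just a sketch; eth0 is a placeholder for whatever the client's interface is called, and rdisc6 is optional:
# run on a LAN client, not the router
ip -6 addr show dev eth0 scope global
# optionally, dump the router advertisement itself (requires the ndisc6 package)
sudo rdisc6 eth0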
Next, I created a firewall rule to allow the tunneled traffic (IP protocol 41) from CenturyLink's 6rd border relay into my network:
set firewall name WAN_IN rule 100 source address 205.171.2.64
set firewall name WAN_IN rule 100 protocol 41
set firewall name WAN_IN rule 100 action accept
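If IPv6 ever seems dead even though the config commits cleanly, one debugging step is to confirm protocol 41 traffic is actually flowing on the WAN side. This is a rough check from the router; pppoe0 is an assumption, so substitute your actual PPPoE interface:
sudo tcpdump -ni pppoe0 'ip proto 41'
# healthy output shows 6in4-encapsulated packets to and from 205.171.2.64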
Since I had a dynamic IP address, I had to make a script to automatically update the IPv6 prefix for the tunnel.
I wanted it to run whenever the PPPoE IP changed, so I put it in the ip-up.d directory:
mkdir -p /config/scripts/ppp/ip-up.d
vi /config/scripts/ppp/ip-up.d/6rd-up
And the script:
#!/bin/bash
ipv4addr=$(curl -4 https://icanhazip.com)
ipv6addr="$(printf "2602:%02x:%02x%02x:%02x00::1\n" $(echo $ipv4addr | tr . ' '))"
echo "$ipv4addr -> $ipv6addr"
source /opt/vyatta/etc/functions/script-template
configure
delete interfaces switch switch0 address
# You'll want to change this to your IPv4 subnet
set interfaces switch switch0 address 10.0.0.1/24
set interfaces switch switch0 address ${ipv6addr}/64
delete interfaces tunnel tun0
set interfaces tunnel tun0 6rd-default-gw ::205.171.2.64
set interfaces tunnel tun0 6rd-prefix '2602::/24'
set interfaces tunnel tun0 address ${ipv6addr}/24
set interfaces tunnel tun0 description 'CenturyLink IPv6 6rd tunnel'
set interfaces tunnel tun0 encapsulation sit
set interfaces tunnel tun0 local-ip $ipv4addr
set interfaces tunnel tun0 mtu 1472
set interfaces tunnel tun0 multicast disable
set interfaces tunnel tun0 ttl 255
commit
# Firewall has to be set after tunnel is initialized
set interfaces tunnel tun0 firewall in ipv6-name WANv6_IN
set interfaces tunnel tun0 firewall local ipv6-name WANv6_LOCAL
commit
save
exit
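The printf line is doing the 6rd math: CenturyLink's 6rd space is 2602::/24, and the subscriber prefix is that /24 followed by the 32 bits of your WAN IPv4 address, giving a per-subscriber /56 of the form 2602:aa:bbcc:dd00::/56 (each octet in hex). A standalone way to sanity-check the derivation before wiring it into PPPoE, using a made-up example address rather than mine:
# example only: verify the prefix math with a hypothetical WAN address
ipv4addr="71.33.200.10"
printf "2602:%02x:%02x%02x:%02x00::1\n" $(echo $ipv4addr | tr . ' ')
# expected output: 2602:47:21c8:0a00::1
The script also needs to be executable, and running it by hand once is an easy way to test it without waiting for the PPPoE session to bounce:
chmod +x /config/scripts/ppp/ip-up.d/6rd-up
/config/scripts/ppp/ip-up.d/6rd-up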
At last, after a reboot, I finally had access to a whole new world of 2^128 addresses.
Performance
I was curious whether the 6rd tunnel had any negative (or positive) performance impact, so I ran some tests:
Latency
For the latency test, I simply ran ping -n 20 <ip> over both IPv4 and IPv6 and took the average.
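Assuming a Windows client, where -n sets the count and -4/-6 force the address family, each comparison boils down to a pair of runs along these lines (google.com as the example target):
ping -4 -n 20 google.com
ping -6 -n 20 google.com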
google.com (Seattle, WA, US)
First, I tested Google, which is obviously going to have good results due to its proximity. This is mostly to check whether the tunnel adds any inherent latency.
- IPv4: min 5ms, avg 5ms, max 5ms
- IPv6: min 5ms, avg 5ms, max 6ms
Not much to say here.
Scaleway VPS (Paris, FR)
Next, I tested a server in Paris (5,000+ miles away) to see if there was a bigger difference.
- IPv4: min 150ms, avg 150ms, max 151ms
- IPv6: min 145ms, avg 145ms, max 146ms
There's a reproducible 5ms decrease in latency over IPv6.
Speed
For reference, I have a 1Gbps fiber connection, and the following measurements are all in megabits per second.
Fast.com/Netflix (Longview, WA, US)
I used fast.com—which is run by Netflix—for the first speed test.
- IPv4: down 940mbps, up 910mbps
- IPv6: down 770mbps, up 680mbps
Here we can see where the 6rd tunnel takes a real performance hit: roughly 170mbps less down and 230mbps less up.
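Part of that gap is inherent to 6rd: every IPv6 packet gets wrapped in an extra IPv4 header, and the tunnel MTU of 1472 (versus 1492 on the PPPoE link itself) means large transfers carry a bit more per-packet overhead. If the numbers look worse than that overhead can explain, it's worth confirming the path MTU really is 1472 and nothing is fragmenting or blackholing. A rough probe from a Linux box, with ipv6.google.com as an arbitrary target:
# 1424 bytes of payload + 40 (IPv6) + 8 (ICMPv6) = 1472; -M do forbids fragmentation
ping -6 -M do -s 1424 -c 4 ipv6.google.com
# if this works but -s 1425 fails, the effective path MTU is 1472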
ipv6-test.com (Paris, FR)
For the second speed test, I used ipv6-test.com, which directly compares IPv4 and IPv6 performance.
- IPv4: down 50mbps
- IPv6: down 454mbps
There is a ~400mbps improvement when compared to IPv4.
Let's try out the Quebec location as well:
- IPv4: down 30mbps
- IPv6: down 342mbps
Same thing here, a massive increase in throughput.
Conclusion
There doesn't seem to be a noticeable difference in daily browsing, but the Fast.com speed test is concerning, as I do a lot of egress/ingress over my connection and want to maximize throughput.
However, the latency and throughput to distant locations seem much better. I'd assume this is due to the tunnel having better peering to those locations. If you frequently download from servers in Europe, you may benefit from enabling the tunnel.
References
Thanks to the following resources from which some of this article was adapted:
- https://github.com/cpcowart/ubiquiti-scripts/blob/master/centurylink-6rd.md
- https://community.ubnt.com/t5/EdgeRouter/Centurylink-1gig-PPOE-6rdd-IPV6-generation-issue/td-p/1558563
If you followed this (tutorial?) and can't quite get it to work, feel free to comment below and I'll try my best to help.