In 2014, I installed the NDN Forwarding Daemon (NFD) on a tiny 128MB virtual machine to run as a router. I named this node ndn6: IPv6 NDN router, because the virtual machine, purchased from the original Low End Spirit forum for €3.00/year, was an IPv6-primary service. I idled this router for three years and shut it down in 2017.
I created NDNts: NDN libraries for the modern web in 2019. Since then, I have been publishing my own content over Named Data Networking, most prominently the NDN push-ups. NDNts does not require a local forwarder, so I operated video repositories that connect directly to a nearby testbed router via UDP tunnels. Shortly after, I started experimenting with QUIC transport, which involved deploying several NDN-QUIC gateways to translate between NFD's plain UDP packets and Chrome's QUIC transport protocol.
One day, I realized: my content is sent to the global NDN testbed, and then retrieved back to my servers for delivery to browsers over QUIC. My video repository in Buffalo and my NDN-QUIC gateway in Montreal are quite close to each other, but the packets take a detour through Boston, adding at least 10ms of latency. Also, since I statically assign a testbed router to each application, downtime on that router takes my application offline as well. I thought: instead of operating isolated applications and gateways, I should set up my own NDN network.
Setting up a new NDN network is no small feat. NFD and NLSR implement forwarding and routing, but I also need to:
- Decide on a topology between different routers.
- Assign a name prefix to each router.
- Install and update software in each router.
- Generate configuration files for NFD and NLSR, and modify them as the topology changes.
- Monitor the network and know about ongoing problems.
As you probably know, the global NDN testbed runs on Ubuntu Server systems managed through a collection of Ansible scripts.
Each node has the NFD forwarder, the NLSR routing daemon, and many other services installed from an Ubuntu PPA.
I never liked this method: apt-get installation pulls too many dependencies onto the host system, and configuration files generated from Jinja2 templates easily fall out of date with the latest software.
I decided to take a different approach: set up my routers using Docker containers.
Currently, I run mostly the same software, including NFD and NLSR, but at newer versions, and the network is managed differently. I'm not publishing my messy scripts yet, but the basic ideas are:
The network topology is defined in a JSON document.
- Each node has a JSON object that defines its name prefix, IP addresses, and enabled services (e.g. whether to run the NDN-QUIC gateway).
- Node names are "flat": I'm using `/yoursunny/` followed by the IATA code of the nearest airport. For example, the node in Warsaw, Poland is named `/yoursunny/WAW`.
- The node JSON object also defines any external links, i.e. connections to the global NDN testbed, and prefixes to be announced over each link.
- Each internal link, i.e. link between two of my nodes, has a JSON object that defines its two ends and the link cost.
- Instead of writing an actual JSON file, I'm writing a jq script that prints the JSON document, so that I can write in less strict syntax (e.g. omitting double quotes around property names).
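As a concrete sketch of that idea, a topology-printing jq script might look like the following. The node names, addresses, and field names here are my illustration, not the actual schema; jq's object construction accepts unquoted identifier keys, which is what makes the syntax less strict than plain JSON.

```shell
#!/bin/bash
# Print the topology JSON document. Property names are unquoted,
# which jq permits for identifier-like keys.
# All names, addresses, and fields below are illustrative assumptions.
jq -n '{
  nodes: {
    WAW: {
      prefix: "/yoursunny/WAW",
      ipv6: "2001:db8:1::1",
      services: { quic_gateway: true }
    },
    BUF: {
      prefix: "/yoursunny/BUF",
      ipv6: "2001:db8:2::1",
      services: { quic_gateway: false }
    }
  },
  links: [
    { ends: ["WAW", "BUF"], cost: 10 }
  ]
}'
```

Downstream scripts can then consume the script's output instead of reading a static JSON file.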
A bash script generates a Docker Compose file from the topology JSON document.
- It checks the local IP address to determine the node's own identity.
- It then extracts the node JSON object and related links, and writes a Compose file that defines a container for each service.
- jq is used extensively for processing JSON documents.
- A Compose file is a YAML document, but JSON is valid YAML, so I can write it with jq.
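A minimal sketch of this generation step, assuming a hypothetical `topology.json` whose `nodes` object is keyed by node name (the field names, image tag, and service layout here are placeholders, not my actual scripts):

```shell
#!/bin/bash
# Emit a Docker Compose file for this node. Since JSON is valid YAML,
# the jq output can be used directly as compose.yml.
# Schema, field names, and image tags are illustrative assumptions.
SELF=WAW   # in my setup this is determined from the local IP address
jq --arg self "$SELF" '
  .nodes[$self] as $node |
  {
    services: {
      nfd: {
        image: "localhost/ndn",
        command: ["nfd"],
        networks: { ndn6: { ipv6_address: $node.ipv6 } }
      }
    },
    networks: { ndn6: { external: true } }
  }
' topology.json > compose.yml
```

A real version would loop over the node's enabled services and emit one container definition per service.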
All internal links are UDP tunnels over IPv6. This is why I named my new network ndn6, reusing the old name.
- The NFD container and the NDN-QUIC gateway container each have a public IPv6 address.
- The IPv6 subnet is routed to the Docker network bridge. If necessary, the host machine runs ndpresponder to turn an on-link IPv6 subnet into a routed subnet.
- Using IPv6 avoids the complexity related to IPv4 Network Address Translation (NAT).
- Public access over IPv6 goes into the container directly.
- Public access over IPv4 is permitted by publishing a port through the normal Docker mechanism.
- For a server that lacks native IPv6, I set up SIT tunnels to its two neighbors and borrowed a /112 subnet from one of them. This allows me to use the same IPv6-only configuration in the Docker containers.
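For illustration, attaching containers to a routed public IPv6 subnet could look like this. The subnet below is from the IPv6 documentation range, not my real allocation; substitute whatever prefix is actually routed to the host.

```shell
# Create a Docker bridge network with IPv6 enabled.
# 2001:db8:6::/64 is a documentation prefix used as a placeholder.
docker network create --ipv6 --subnet 2001:db8:6::/64 ndn6net

# A container attached with a fixed IPv6 address is then directly
# reachable from the public Internet (image name is a placeholder).
docker run -d --network ndn6net --ip6 2001:db8:6::2 \
  --name nfd localhost/ndn nfd
```

With routed addressing there is no NAT state to manage; each container's address is stable and globally reachable.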
Most NDN software packages are built into a single Docker image.
- The image is based on Debian Bullseye and contains NFD, NLSR, ndnping, and other tools from the NFD nightly builds.
- The same image is instantiated as multiple containers in the deployment, where each container runs only one service.
- I chose to put all services in the same image, instead of one image per service, to reduce overall storage usage.
- The NDN-QUIC gateway has a separate Docker image because it's a Python program that is substantially different from the rest.
Configuration files are usually generated during container startup.
- An entrypoint script in the container generates necessary configuration files based on information extracted from the topology JSON document, which is passed into the container as either a mounted file or an environment variable.
- Configuration files are not generated from a template. Instead, they are modified from the default configuration using infoedit and jq.
- If there's a minor change in the configuration schema (e.g. NFD adds a new required field), my script would continue to work without changes.
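As a sketch of the edit-not-template approach, an entrypoint script might start from the packaged default nfd.conf and apply point edits. The infoedit flags and section paths below are my assumptions from memory of ndn6-tools, not verified invocations:

```shell
# Start from the default configuration shipped with NFD, then modify
# only the settings we care about. Section names are assumptions.
cp /etc/ndn/nfd.conf.sample /etc/ndn/nfd.conf
infoedit -f /etc/ndn/nfd.conf -s log.default_level -v INFO
infoedit -f /etc/ndn/nfd.conf -s tables.cs_max_packets -v 4096
```

Because every field not explicitly edited keeps its default value, a new required field added by NFD simply flows through untouched.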
Secret keys are mounted into the container.
- TLS certificates (used by NDN-QUIC gateway) are obtained via acme.sh installed on the host machine.
- NDN keychains (used by NLSR) are initialized and updated by calling `ndnsec` in an ephemeral container.
- I didn't use an NDN certificate management system. To issue a site certificate signed by the root key, I simply copy-paste the certificate request between different servers.
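The copy-paste flow roughly follows the standard `ndnsec` commands; the names below are placeholders, and treat the exact command options as assumptions:

```shell
# On the new node: generate a key; the printed self-signed
# certificate serves as the certificate request.
ndnsec key-gen /yoursunny/WAW > WAW.req

# Copy WAW.req to the server holding the site root key, then sign it:
ndnsec cert-gen -s /yoursunny WAW.req > WAW.cert

# Copy WAW.cert back to the new node and install it:
ndnsec cert-install WAW.cert
```

For a handful of nodes this manual flow is less work than standing up a certificate management service.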
Several programs from ndn6-tools are brought in to help with the setup.
- `ndn6-prefix-proxy` handles incoming prefix registration commands. Compared to NFD's builtin `/localhop/nfd` handler, it offers precise control over which prefixes can be registered.
- `ndn6-register-prefix-remote` manages outgoing prefix announcements. Compared to NFD's builtin prefix propagation module, it allows my node to connect to multiple external routers and announce different prefixes to each.
The ndn6 network has been operating for three months now. It started with 4 nodes and has grown to 9 nodes. I've made an informational page and a network map, and integrated the ndn6 network into the NLSR status page as well as the NDN-FCH 2021 service.
From this exercise, I gained some experience in running an NDN network, and gained more appreciation for the work of Dr. John DeHart, who operates the global NDN testbed. I also found several bugs in the NLSR routing software and subsequently deployed the fixes.
If you are thinking of setting up a whole NDN network, you know it would be difficult. While this article is not a tutorial, I hope it can give you some hints!