Use NFD nightly with Mini-NDN

Mini-NDN is a network emulation tool that enables testing, experimentation, and research on the Named Data Networking (NDN) platform. It uses container technology to emulate a small-to-medium NDN network topology. Each container represents a network node and runs the NDN Forwarding Daemon (NFD), the NLSR routing daemon, and other NDN programs. Virtual Ethernet adapters are added between containers to simulate network links.
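For example, Mini-NDN describes a topology in a simple configuration file with [nodes] and [links] sections, roughly like the sketch below (syntax quoted from memory; consult the Mini-NDN documentation for the exact format, and the node names and parameters here are arbitrary):

[nodes]
a: _ cache=1000
b: _ cache=1000
c: _ cache=1000
[links]
a:b delay=10ms
b:c delay=20ms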

During the recent 12th NDN hackathon, I worked with my buddy Saurab Dulal to improve Mini-NDN. One of our achievements was a shiny new Mini-NDN installation script. The new script can install NDN software binary packages from the named-data PPA, which saves significant time compared to the alternative of compiling from source code.

However, a drawback of the named-data PPA is that its binary packages are only updated after each NFD release, which occurs a few times a year. If it has been several months since the last release, the PPA packages would be outdated: they would not include the latest features, improvements, and bug fixes in the NDN codebase.

If you want to use up-to-date NDN software, but do not want to wait for the software to compile from source, I can offer another option: install the weekly automated builds from NFD-nightly. This article explains how to do that.


NFD nightly APT repository

This article contains instructions for the NFD nightly APT repository, which provides automated builds of the NDN Forwarding Daemon (NFD) and related software.

This article was last updated on 2021-10-30 to reflect the latest changes.

Instructions

To install NDN software from the NFD nightly APT repository:

  1. If you have previously installed NDN software from the named-data PPA or from source code, you need to remove it first to avoid conflicts. See the "switch between installation methods" section below.

  2. Visit the https://nfd-nightly.ndn.today webpage, choose your operating system and CPU architecture, and you'll get a setup command.

    Run this setup command in the terminal, which adds NFD-nightly as a package source.

  3. Update package lists:

    sudo apt update
  4. Install desired packages, such as:

    sudo apt install nfd ndnping ndnpeek infoedit

    You can see the available packages on the https://nfd-nightly.ndn.today webpage.
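    After installation, you can run a quick sanity check; the exact version string will differ depending on the build:

    nfd --version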

NDN Video Streaming on the ndn6 Network

The ndn6 network, my own global-scale Named Data Networking network, came back earlier this year. I moved my NDNts video streaming app onto the ndn6 network to reduce its dependency on the NDN testbed. How well is it performing?

QUIC ⇒ HTTP/3

In my last article "NDN video streaming over QUIC", I used the Chrome browser's experimental QuicTransport feature to perform video streaming over Named Data Networking. The analysis revealed that QUIC transport generally performed better than WebSockets in this application, according to metrics including video resolution and startup latency.

Web technologies are constantly evolving. QuicTransport was in Origin Trial status at the time, but it was discontinued as of Chrome 91, and WebTransport was introduced in its place. The main difference is that WebTransport uses HTTP/3 as the underlying network protocol, while QuicTransport uses QUIC datagrams.

Since HTTP/3 runs over QUIC, I expected no performance difference between the two. I promptly registered for the WebTransport Origin Trial, and updated my gateways and NDNts libraries to use the new API.
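For reference, here is a minimal sketch of how a browser client opens a WebTransport session and exchanges datagrams. The gateway URL and the payload bytes are placeholders, and this is not the exact NDNts code.

async function connectToGateway(): Promise<void> {
  // placeholder URL; a real deployment points at an NDN-over-HTTP/3 gateway
  const transport = new WebTransport("https://gateway.example.net/ndn");
  await transport.ready;

  // send a datagram (in the real app, a TLV-encoded NDN Interest)
  const writer = transport.datagrams.writable.getWriter();
  await writer.write(new Uint8Array([0x05, 0x00]));

  // receive a datagram (in the real app, a TLV-encoded NDN Data)
  const reader = transport.datagrams.readable.getReader();
  const { value } = await reader.read();
  console.log("received datagram", value);
}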

Return of the ndn6 Network

In 2014, I installed an NDN Forwarding Daemon (NFD) router on a tiny 128MB virtual machine. I named this node ndn6: IPv6 NDN router, because the virtual machine, purchased from the original Low End Spirit forum for €3.00/year, was an IPv6-primary service. I idled this router for three years, and then shut it down in 2017.

I created NDNts: NDN libraries for the modern web in 2019. Since then, I have been publishing my own content over Named Data Networking, most prominently the NDN push-ups. NDNts does not require a local forwarder, so I operated video repositories by connecting directly to a nearby testbed router via a UDP tunnel. Shortly after, I started experimenting with QUIC transport, which involved deploying several NDN-QUIC gateways to translate between NFD's plain UDP packets and Chrome's QUIC transport protocol.

One day, I realized: my content is sent to the global NDN testbed, and then retrieved back to my servers for delivery to browsers over QUIC. My video repository in Buffalo and my NDN-QUIC gateway in Montreal are quite close to each other, but the packets take a detour through Boston, increasing latency by at least 10ms. Also, since I statically assigned a testbed router to each application, any downtime of that router would take my application offline as well. I thought: instead of operating isolated applications and gateways, I should set up my own NDN network.

Setting up a new NDN network is no small feat. NFD and NLSR implement forwarding and routing, but I also need to:

  • Decide on a topology between different routers.
  • Assign a name prefix to each router.
  • Install and update software in each router.
  • Generate configuration files for NFD and NLSR, and modify them as the topology changes.
  • Monitor the network and know about ongoing problems.

Today I Learned: openat()

fopen and open

In the C programming language, the <stdio.h> header supplies functions for file input and output. To open a file, we usually use the fopen function. It is defined by the C language standard and works on every operating system.

Working at a lower level, there's also the open function. It is a system call provided by the Linux kernel and exposed through glibc.

Both fopen and open have an input parameter: the file pathname, as a NUL-terminated string. These two functions are declared like this:

FILE* fopen(const char* filename, const char* mode);

int open(const char* pathname, int flags);
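For illustration, here is a tiny program that opens the same (arbitrarily chosen) file both ways:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
  // fopen returns a buffered stream defined by the C standard
  FILE* file = fopen("/etc/hostname", "r");
  if (file != NULL) {
    fclose(file);
  }

  // open returns a raw file descriptor from the Linux kernel
  int fd = open("/etc/hostname", O_RDONLY);
  if (fd >= 0) {
    close(fd);
  }
  return 0;
}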

Stardate 1302.8

Stardate 1302.8, San Francisco.

Just as I was ready to shut the doors of the yoursunny summer host sales office and spend the weekend with my snake collection, a customer rushed in, shouting hurriedly: my storage server in Midwest Eurasia is involucrated, the routing announcement is withdrawn, all my family memories are gone, can you help?

I opened my sleepy eyes, downed a bottle of Red Bull, and immediately started helping this customer. I sent a robot to the garden where her storage server was located, and provisioned a new storage server in our luna location. Within seconds, the robot found the original server and plugged into its USB port, and her precious data were being cloned to our infrastructure. At the same time, I was busy engraving a golden key card containing the private key to decrypt the data.

15 minutes later, data restoration was complete, and I opened up the portal room. The customer swiped the card, and a colorful cloud appeared on the portal. Inside, there was a faint wall of text:

GO
GO
GO
GO
GO

Deep Atlantic Storage: Streaming Bits

I'm bored on the 4th of July holiday, so I made a wacky webpage: Deep Atlantic Storage. It is described as a free file storage service where you can upload any file to be stored deep in the Atlantic Ocean, with no size limit or content restriction whatsoever. How does it work, and how can I afford to provide it?

This article is the third of a 3-part series that reveals the secrets behind Deep Atlantic Storage. The first part revealed that the uploaded file is sorted, which drastically reduces its storage demand, and introduced the bit sorting algorithm. The second part covered how to process the upload in the browser using Web Workers. Now I'll continue from there, and explain where I store the files and how I offer downloads at a reasonable cost.

Storage in the URL

Deep Atlantic Storage sorts the bits in every uploaded file. After sorting, each file can be represented by just two numbers: the number of 0 bits and the number of 1 bits. Given these two numbers, the sorted file can be reconstructed.

I could set up a database or use one of those fancy NoSQL thingies to store the numbers that represent the files, but I prefer my websites to be stateless so that I don't need to take backups. Therefore, I decided to encode those numbers in the URI.
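To make this concrete, here is a sketch of the idea in TypeScript. The URL layout and the assumption that all 0 bits come before all 1 bits are my own illustration, not necessarily what the site actually does.

// the whole "file" is just two counts, so they can live in the URL path
function makeDownloadUrl(zeros: number, ones: number): string {
  return `https://example.com/retrieve/${zeros}/${ones}`;
}

// rebuild the sorted file from the two counts,
// assuming all 0 bits come before all 1 bits
function reconstruct(zeros: number, ones: number): Uint8Array {
  const totalBits = zeros + ones;
  const bytes = new Uint8Array(Math.ceil(totalBits / 8));
  for (let i = zeros; i < totalBits; i++) {
    bytes[i >> 3] |= 0x80 >> (i & 7); // set bit i, MSB-first within each byte
  }
  return bytes;
}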

Have You Eaten the Cicadas in the Trees?

Cicadas, commonly known as zhiliao, are a kind of insect. Every 17 years, in the eastern United States, billions of Brood X cicadas emerge from underground to mate and reproduce. Wherever there are trees, the cicada army sings day and night without end, at more than 90 decibels, much to the annoyance of local residents.

Two cicadas resting on a tree branch

Cicadas are noisy but harmless to people and livestock, and they are rich in protein. On June 19, the sunny boy headed to a nearby grove to catch cicadas and boost his nutrition. Cicadas do not bite and fly slowly; once spotted, they have little chance of escape. However, that day was already near the end of cicada season, so after two hours of weaving through waist-deep grass, the sunny boy caught only 24 of them.

A cicada hiding in the grass

Back home, the cicadas were rinsed briefly in the sink, then stir-fried in hot oil for 30 seconds and seasoned with a pinch of salt. The transparent wings melt in the mouth, and the golden bodies are tender and refreshing: a flavor all of its own.

Deep Atlantic Storage: Reading File Upload in Web Workers

I'm bored on the 4th of July holiday, so I made a wacky webpage: Deep Atlantic Storage. It is described as a free file storage service where you can upload any file to be stored deep in the Atlantic Ocean, with no size limit or content restriction whatsoever. How does it work, and how can I afford to provide it?

This article is the second of a 3-part series that reveals the secrets behind Deep Atlantic Storage. The previous part introduced the algorithm I use to sort all the bits in a Uint8Array. Now I'll continue from there, and explain how the webpage accepts and processes file uploads.

File Upload

File upload has been a part of the HTML standard for as long as I can remember:

<form action="upload.php" method="POST" enctype="multipart/form-data">
  <input type="file" name="file">
  <input type="submit" value="upload">
</form>
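Since Deep Atlantic Storage processes the upload in the browser rather than on a server, the page needs to get hold of the selected File object itself. Here is a rough sketch of that step; the element selection and the worker script name are hypothetical, not the site's actual code.

const fileInput = document.querySelector<HTMLInputElement>('input[type="file"]')!;
fileInput.addEventListener("change", () => {
  const file = fileInput.files?.[0];
  if (!file) {
    return;
  }
  // File objects survive structured cloning, so the file can be handed to a Web Worker as-is
  const worker = new Worker("sort-worker.js");
  worker.postMessage(file);
});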