What is a "Face" in Named Data Networking?

Face is an overloaded term in Named Data Networking (NDN). Most developers have an intuition about what a face is, but often find it hard to explain. This article attempts to demystify the concept of a face in NDN.

"Face" as Defined in Publications

The original NDN paper, Networking Named Content, introduces the term face in a footnote:

We use the term face rather than interface because packets are not only forwarded over hardware network interfaces but also exchanged directly with application processes within a machine.

NFD, the original NDN forwarder software, explains in the NFD Developer Guide:

NFD can communicate on not only physical network interfaces, but also on a variety of other communication channels, such as overlay tunnels over TCP and UDP. Therefore, we generalize "network interface" as "face", which abstracts a communication channel that NFD can use for packet forwarding. The face abstraction (nfd::Face class) provides a best-effort delivery service for NDN network layer packets.

RFC 8793: Information-Centric Networking (ICN): Content-Centric Networking (CCNx) and Named Data Networking (NDN) Terminology reflects the same view in more detail:

ICN Interface: A generalization of the network interface that can represent a physical network interface (ethernet, Wi-Fi, bluetooth adapter, etc.), an overlay inter-node channel (IP/UDP tunnel, etc.), or an intra-node inter-process communication (IPC) channel to an application (unix socket, shared memory, intents, etc.).

Common aliases include: face.

ICN Consumer: An ICN entity that requests Data packets by generating and sending out Interest packets towards local (using intra-node interfaces) or remote (using inter-node interfaces) ICN Forwarders.

Data Forwarding: A process of forwarding the incoming Data packet to the interface(s) recorded in the corresponding PIT entry (entries) and removing the corresponding PIT entry (entries).

All three publications associate face with two properties:

  • A face delivers NDN network layer packets, such as Interest and Data.
  • A face is either an inter-node communication channel that sends packets to other nodes, or an intra-node communication channel that sends packets to another process on the same node.

Compared to an IP interface:

  • The first property is similar: an IP interface delivers IP datagrams.
  • The second property is different:
    • An IP interface is generally an inter-node communication channel, except the loopback interface.
    • Intra-node communication is typically handled by the local network stack, which uses sockets.

Inter-Node Face - Interface vs Adjacency

According to RFC 8793, an inter-node face could be either a physical network interface (ethernet, Wi-Fi, bluetooth adapter, etc.) or an overlay inter-node channel (IP/UDP tunnel, etc.). In practice, the latter category is often generalized as adjacency to allow non-overlay protocols, such as an Ethernet unicast channel.

NDN implementations can choose to implement the interface category, the adjacency category, or both.

In many cases, interfaces and adjacencies operate independently, and the forwarding plane does not need to distinguish between them.
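The distinction can be made concrete in code. Below is a minimal sketch, with illustrative names such as InterfaceFace and AdjacencyFace that do not come from any real forwarder, of how the two categories share a common shape, which is why the forwarding plane often need not distinguish them:

```typescript
// Illustrative model of the two inter-node face categories.
// An "interface" face binds to a local NIC and may reach many neighbors;
// an "adjacency" face is pinned to one specific remote peer.
type InterfaceFace = {
  kind: "interface";
  localNic: string;           // e.g. "eth0"
};
type AdjacencyFace = {
  kind: "adjacency";
  localNic: string;
  remoteAddr: string;         // e.g. a MAC address or a UDP endpoint
};
type InterNodeFace = InterfaceFace | AdjacencyFace;

// A forwarder that treats both alike only needs the common portion.
function describeFace(face: InterNodeFace): string {
  switch (face.kind) {
    case "interface":
      return `interface on ${face.localNic}`;
    case "adjacency":
      return `adjacency on ${face.localNic} toward ${face.remoteAddr}`;
  }
}
```

A forwarding plane that only calls functions like describeFace never inspects the kind tag, which is exactly the situation where interfaces and adjacencies can operate side by side.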

What's the Remote Address?

When you transmit a packet on an Ethernet adapter, you must specify the destination address. If the Ethernet adapter serves as a face on an NDN node, what destination address should you use?

My answer to this question used to be: just use the NDN multicast address!

The NDN multicast address refers to 01:00:5e:00:17:aa, which is an Ethernet multicast group that every NDN node is expected to join. Sending a packet to the NDN multicast address effectively broadcasts the packet to every NDN node within the Ethernet subnet.
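For illustration, here is a short sketch of how a node could recognize this address as multicast: an Ethernet address is multicast when the least significant bit of its first octet (the I/G bit) is set. The function name is illustrative.

```typescript
// The NDN multicast group address mentioned above.
const NDN_ETHER_MULTICAST = "01:00:5e:00:17:aa";

// An Ethernet address is multicast when the least significant bit of
// its first octet (the I/G bit) is 1.
function isEtherMulticast(mac: string): boolean {
  const firstOctet = parseInt(mac.split(":")[0], 16);
  return (firstOctet & 0x01) === 1;
}
```

Any unicast MAC address (first octet with I/G bit clear, e.g. 02:00:00:00:00:01) fails this test, while the NDN group address passes it.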


A common criticism of this broadcasting approach is its processing overhead. Consider a simple topology of three NDN nodes and an Ethernet switch, where node C retrieves a file from node P. Under the broadcasting approach, every packet reaches every NDN node, including Q, which neither wants nor provides the content. This unnecessarily consumes CPU power on Q.

My dissertation has an answer to this: install an NDN-NIC that filters packets by name in hardware, so that Q can drop those packets in the Ethernet adapter instead of processing them in the CPU. I managed to build a simulator for NDN-NIC, but so far nobody has realized NDN-NIC in actual hardware.

My other answer was: install an NDN forwarder on the switch! In that case, the switch could intercept all NDN communication regardless of the Ethernet destination address. When it sees that the communication is between C and P, it can avoid sending packets to Q. I made this happen on OpenWrt home routers, but I haven't figured out how to do the same on the 100 Mbps switch that I have had since 2006.

Wireless Multicast

Broadcasting on wired Ethernet has some CPU overhead, but does not otherwise affect network performance. On wireless networks, the situation is much worse: Wi-Fi multicast is unreliable, slow, prone to interference, and power-hungry.

A recent ICN-SRA 2020 publication, Enabling Named Data Networking Forwarder to Work Out-of-the-box at Edge Networks, proposes a procedure like this:

  1. C sends one Interest to the NDN multicast address, asking for the first segment of a file.
  2. P replies to this Interest with a Data packet.
  3. Now that C knows P has the file, it sends subsequent Interests over unicast to P to retrieve the remaining segments.
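The three steps can be sketched as a toy simulation, with delivery channels modeled as plain functions. The node names C, P, Q and all function names here are illustrative:

```typescript
// Toy model: multicast delivery reaches every node on the subnet,
// unicast delivery reaches only the addressed node.
type Packet = { kind: "Interest" | "Data"; name: string; from: string };

const processed: Record<string, number> = { C: 0, P: 0, Q: 0 };

function deliverMulticast(pkt: Packet): void {
  // Every node on the subnet, including bystander Q, processes the packet.
  for (const node of ["C", "P", "Q"]) {
    if (node !== pkt.from) processed[node]++;
  }
}

function deliverUnicast(pkt: Packet, to: string): void {
  processed[to]++; // only the addressed node processes it
}

// Step 1: C multicasts an Interest for the first segment.
deliverMulticast({ kind: "Interest", name: "/file/seg=0", from: "C" });
// Step 2: P replies with a Data packet; C learns P's address from it.
deliverUnicast({ kind: "Data", name: "/file/seg=0", from: "P" }, "C");
// Step 3: the remaining segments travel over unicast between C and P.
for (let i = 1; i < 100; i++) {
  deliverUnicast({ kind: "Interest", name: `/file/seg=${i}`, from: "C" }, "P");
  deliverUnicast({ kind: "Data", name: `/file/seg=${i}`, from: "P" }, "C");
}
```

In this toy run, Q processes only the single multicast Interest, while C and P each process one packet per segment exchanged.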

By switching to unicast for most of the file transfer, this approach minimizes CPU overhead on Q and mitigates the limitations of Wi-Fi multicast. However, when it comes to implementation, face issues arise.

In 2019, developers attempted to implement this procedure in NFD using only one face: the physical Ethernet adapter. Each packet meant to be sent over unicast would be annotated with the unicast address, a concept that had been generalized as EndpointId in NFD and was already used in the NDNLP fragmentation and reassembly implementation.

This direction appeared straightforward at first, but soon faced obstacles as EndpointId bled deeper into the forwarding plane and required more and more changes in the forwarding logic and data structures.

In 2020, another implementation attempt was made using two faces: the physical Ethernet adapter for broadcast, and an adjacency face between C and P for unicast. The forwarding logic would extract the unicast address of P from the first Data packet, dynamically create an adjacency face, set up a route, and then send subsequent Interests on this new face. This would not fully insulate the forwarding plane from understanding EndpointId, but the knowledge would be contained within the logic that performs unicast face creation, and hopefully nothing else would need to change.

This direction met challenges from the design stage. Generally, face creation and route updates are assumed to be infrequent, but dynamic adjacency creation violates this assumption. NFD's architecture could allow the forwarding plane to create adjacency faces; however, performing a route update requires sending a command packet to the control plane, which runs on a separate thread, and it isn't possible to send a new packet while the forwarding plane is busy processing the Data packet received via the multicast face. Thus, this attempt never got past the design stage.

"Face" in Libraries

The last category in RFC 8793 is: an intra-node inter-process communication (IPC) channel to an application (unix socket, shared memory, intents, etc.). This is, obviously, from a forwarder's point of view:

  • NFD has a Unix socket listener that spawns a face for each incoming connection.
  • NDN-DPDK supports shared memory faces based on libmemif.

This category is sometimes generalized to include intra-node intra-process communication channels as well. For example, NFD has an internal face that provides a communication channel to NFD's control plane.

On the application side, what do we call the component that connects to the IPC face on the forwarding plane? In most libraries, the answer is: "face". It's important to understand that a so-called "face" in an application library is completely different from a face in the forwarder.

Both library and forwarder IPC faces can send and receive NDN network layer packets, but this is where the commonality ends. Library faces generally have these additional services:

  • match incoming Data against outgoing Interests
  • keep track of Interest timeouts
  • perform prefix registrations
  • dispatch incoming Interests to producer callback functions

None of these appear in the RFC 8793 face definition.
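To make the difference concrete, here is a minimal sketch of the extra bookkeeping a library "face" layers on top of the raw packet channel. The class and method names are illustrative, not any real library's API, and abstract time units replace real timers:

```typescript
// A pending-Interest record: the name awaiting Data, an expiry time,
// and the consumer callbacks.
type Pending = {
  name: string;
  expiry: number;            // in abstract time units
  onData: (name: string) => void;
  onTimeout: () => void;
};

class LibraryFace {
  private pit: Pending[] = [];
  private now = 0;

  expressInterest(name: string, lifetime: number,
                  onData: (name: string) => void, onTimeout: () => void): void {
    this.pit.push({ name, expiry: this.now + lifetime, onData, onTimeout });
  }

  // Called when the forwarder-side face delivers a Data packet:
  // match it against outgoing Interests.
  receiveData(name: string): void {
    const i = this.pit.findIndex((p) => p.name === name);
    if (i >= 0) {
      const [p] = this.pit.splice(i, 1);
      p.onData(name);
    } // unsolicited Data is dropped
  }

  // Advance abstract time and fire timeouts, in place of a real timer.
  tick(units: number): void {
    this.now += units;
    const expired = this.pit.filter((p) => p.expiry <= this.now);
    this.pit = this.pit.filter((p) => p.expiry > this.now);
    expired.forEach((p) => p.onTimeout());
  }
}
```

A forwarder face has none of this: it simply moves packets, leaving Interest-Data matching and timeout tracking to the other end.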

"Face": Not the Right Abstraction

Looking at how applications use the library "face", we can see that the bare-bones services provided by a "face" are insufficient for most applications.

A library "face" represents a connection to a single forwarder, such as the local NFD. If the forwarder is restarted, the "face" cannot gracefully handle the situation and reconnect to the forwarder. Instead, it throws an error into the application, and most applications would crash with a message like:

ERROR: error while receiving data from socket (End of file)

Another problem is that, when the application sends an Interest, it typically wants to receive the Data and have its signature validated, but the library lacks the facility to do so. I've seen far too many applications with code similar to this:

void retrieveData()
{
  Interest interest(name);
  face.expressInterest(interest,
    [=] (const Interest&, const Data& data) { // Data arrival
      validator.validate(data,
        [=] (const Data& data) { useData(data); }, // validation success
        [=] (auto&&...) { reportError(); });       // validation failure
    },
    [=] (auto&&...) { reportError(); }, // Nack
    [=] (const Interest&) { // timeout
      if (++nRetries <= 2) retrieveData();
      else reportError();
    });
}
Repeating this code everywhere not only complicates application code, but also introduces suboptimal behavior. For example, the retransmission logic in the above snippet is incorrect: it sends the retransmitted Interest after the previous Interest has timed out, and many forwarding strategies would not be able to identify the Interest as a retransmission because the PIT entry has been removed.

When I discussed this problem with the ndn-cxx designers, I was told to use the SegmentFetcher, which implements a congestion control algorithm for fetching segmented objects (such as a file), including the ability to perform retransmissions. However, not every application Data packet is part of a segmented object. Even if the SegmentFetcher could be modified to accommodate packets that do not have a segment number in the name, it would be a cumbersome API with too much unnecessary complexity.

Endpoint in NDNts

When I designed NDNts, I wanted to explore new API designs instead of copying problematic designs from existing libraries. One of the decisions I made was to not have a Face. Instead, it is split into three pieces: L3Face, Forwarder, and Endpoint. You may watch my presentation video, NDNts demo at NDN Community Meeting 2020, for a brief introduction to this approach.

[Diagram: application modules use Endpoints; Endpoints hook into the packet demultiplexer (forwarder), which also handles prefix registration; the demultiplexer attaches to L3Faces, each wrapping a transport.]

The transport sits at the bottom layer. NDNts has many transport implementations, including Unix sockets, UDP, WebSockets, QUIC, and Web Bluetooth. When used for inter-node communication, they generally have adjacency semantics.

The L3Face type is a network layer face, conceptually similar to a forwarder's face as defined in RFC 8793. To the upper layer, it can send and receive network layer packets. To the lower layer, it offers fragmentation and reassembly functionality.
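As an illustration of the lower-layer service, here is a sketch of sequence-numbered fragmentation and reassembly. This is a toy scheme with illustrative names, not the actual NDNLP wire format:

```typescript
// A fragment carries its position (seq), the total fragment count,
// and a slice of the original packet bytes.
type Fragment = { seq: number; count: number; payload: Uint8Array };

// Split a network layer packet into MTU-sized fragments.
function fragment(packet: Uint8Array, mtu: number): Fragment[] {
  const count = Math.ceil(packet.length / mtu);
  const frags: Fragment[] = [];
  for (let seq = 0; seq < count; seq++) {
    frags.push({ seq, count, payload: packet.subarray(seq * mtu, (seq + 1) * mtu) });
  }
  return frags;
}

// Reassemble fragments, tolerating out-of-order arrival.
function reassemble(frags: Fragment[]): Uint8Array {
  const sorted = [...frags].sort((a, b) => a.seq - b.seq);
  const total = sorted.reduce((n, f) => n + f.payload.length, 0);
  const out = new Uint8Array(total);
  let offset = 0;
  for (const f of sorted) {
    out.set(f.payload, offset);
    offset += f.payload.length;
  }
  return out;
}
```

The upper layer sees whole packets on both sides; only the transport below the L3Face deals in fragments.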

The Endpoint type is where an application can express an Interest or become a producer under a name prefix. It has a consume function that sends an Interest, with options to enable automatic retransmission and signature verification. This allows the retrieveData example to be simplified as:

async function retrieveData(name: Name) {
  const interest = new Interest(name);
  try {
    const data = await endpoint.consume(interest, { retx: 2 });
    useData(data);
  } catch {
    reportError();
  }
}
The Endpoint type also has a produce function for registering a producer callback, with options to enable automatic Data signing and buffering.

The packet demultiplexer (implemented as the Forwarder type) is a unique piece in NDNts. It is a stripped-down version of a forwarder, complete with simplified versions of the FIB and PIT, but without a Content Store or forwarding strategies. A "face" in this forwarder (implemented as the FwFace type) is a duplex stream of NDN network layer packets, which could be:

  • an L3Face that wraps a transport
  • a producer callback created by endpoint.produce function
  • a pending Interest created by endpoint.consume function

You may be surprised to learn that every single pending Interest generates a new FwFace, but rest assured that creating an FwFace amounts to inserting a few hash table entries, so it is a cheap operation.
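A toy sketch of this demultiplexer idea, with illustrative names (this is not the actual NDNts Forwarder API): a pending Interest is one PIT entry, a producer is one FIB entry, and dispatch is a table lookup.

```typescript
type Handler = (name: string) => void;

class Demux {
  private pit = new Map<string, Handler>(); // exact-name match for Data
  private fib = new Map<string, Handler>(); // prefix match for Interests

  // "Creating an FwFace" for a pending Interest is just a hash table insert.
  addPendingInterest(name: string, onData: Handler): void {
    this.pit.set(name, onData);
  }

  addProducer(prefix: string, onInterest: Handler): void {
    this.fib.set(prefix, onInterest);
  }

  // Deliver Data to the matching pending Interest, then remove the entry.
  dispatchData(name: string): void {
    const h = this.pit.get(name);
    if (h) {
      this.pit.delete(name);
      h(name);
    }
  }

  // Longest-prefix match over the simplified FIB.
  dispatchInterest(name: string): void {
    let best = "";
    let handler: Handler | undefined;
    for (const [prefix, h] of this.fib) {
      if (name.startsWith(prefix) && prefix.length > best.length) {
        best = prefix;
        handler = h;
      }
    }
    handler?.(name);
  }
}
```

Because both consumers and producers are just table entries behind the same dispatch loop, the demultiplexer treats them uniformly, mirroring how NDNts treats every FwFace alike.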

Variations of the packet demultiplexer exist in other libraries, usually as an "implementation detail" of the Face type. For example, ndn-cxx has a FaceImpl type that contains a PIT and an "Interest filter table" (i.e., a simplified FIB). Unlike the others, NDNts is the only library that allows multiple transports to be attached to the packet demultiplexer. This in turn enables NDNts to automatically handle transport errors and even reconnect to a different remote forwarder, without manual handling in the application logic.

Prefix registration functionality is hooked onto the packet demultiplexer. This allows NDNts to remain agnostic to the forwarder's management protocol: NDNts can perform prefix registrations with both the NFD and NDN-DPDK forwarders while the application logic stays the same. Moreover, when an underlying L3Face reconnects, the library resends prefix registration commands automatically, allowing the application instance to transparently move between network attachment points.

Final Words

In this article, I attempt to demystify what a face is in NDN forwarders and NDN libraries. I explain two potential semantics of an inter-node face in an NDN forwarder, interface and adjacency, and point out how their differences and relations impact forwarder design. After that, I describe what a "face" is in NDN libraries such as ndn-cxx, and why it is fundamentally different from a face in NDN forwarders. Finally, I introduce the Endpoint design in my NDNts library.

An honorable mention goes to the NDNApp type in the python-ndn library. It improves over a basic library "face" in that it can automatically validate packet signatures. This is one step in the right direction.

I did not cover the Face type in my other library, NDNph. In short, it is similar to an adjacency face of a forwarder. However, its design deserves a separate article that I will write in the future.