Why Doesn't Ownly Work on the ndn6 Network? A Decade of Policy-Blind Routing

Stardate 1481.6, Antwerp. Three friends opened Ownly, the flagship NDN application developed and published by the UCLA Internet Research Laboratory. They started typing into the decentralized collaborative editor, but one of them could not see the edits. They checked the connection: 📶 online. They checked the prefix registration: ✅ successful. Yet the document would not sync.

What worked seamlessly in a UCLA lab failed in the wild because of a missing feature in the routing protocol. This article moves beyond the #2856 confinement issue from the last episode and identifies additional gaps that prevent applications from working across autonomous system boundaries. In particular, we assume a relaxed prefix registration policy that allows the Ownly application to register its desired prefixes in both the global NDN testbed and the ndn6 network, and explore what other features are necessary to enable its sync-based communication patterns.

Inter-Domain Routing Built on Grep and Hope

I currently operate two global-scale networks: AS200690 and ndn6.

  • AS200690 is an IPv6-only network registered with RIPE NCC. I have eight routers connected with each other via WireGuard or GRE tunnels. Each router runs the BIRD Internet Routing Daemon in a KVM server.
  • ndn6 is an independent NDN network. I have six routers connected via UDP tunnels over IPv6. Each router runs the NDN Forwarding Daemon (NFD) and NDN Link State Routing (NLSR) in Docker containers.

Both of my networks are peering with other networks, but the setups are drastically different. AS200690 uses the Border Gateway Protocol (BGP), a standardized inter-domain routing protocol used by every ISP in the world. At the Ontario Internet Exchange and other locations, I can establish a BGP session and exchange routes with another network. The ndn6 network, on the other hand, requires a custom hack.

The ndn6 network, similar to the global NDN testbed, relies on NLSR to distribute prefix reachability information. However, like OSPF in the IP world, NLSR (in link-state mode) is strictly an intra-domain routing protocol. Its design assumes a single administrative domain where all routers fully trust one another, and every node builds an identical topology map of the entire network. It does not have any concept of autonomous system (AS) boundaries, peering relationships, or transit policies.

While it is possible to interconnect the NLSR instances across my ndn6 network and the global NDN testbed, doing so would mean that the "two networks" are effectively merged into one. It would no longer be possible for either administrator to enforce routing policies, such as adding a black hole, without affecting the other "half-network".

To bridge this architectural gap and connect the ndn6 network to the global NDN testbed while maintaining its independence, I had to develop an out-of-band workaround. This is implemented in the ndn6-register-prefix-remote tool, which is part of ndn6-tools.

The mechanism relies on a fragile pipeline of "grep and hope":

  1. Once a minute, the program retrieves Name LSAs from the /localhost/nlsr/lsdb/names dataset published by NLSR.
  2. It looks for prefixes that start with my network name /yoursunny.
  3. Whenever it discovers a valid prefix, it generates a prefix registration command and sends it to the testbed router over /localhop/nfd/rib/register.
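The filtering step is the "grep" half of the pipeline. It can be sketched in a few lines of Python (the LSA dictionaries here are a hypothetical stand-in; the real ndn6-register-prefix-remote speaks the NFD and NLSR management protocols):

```python
# Sketch of the "grep and hope" filter: keep only prefixes under /yoursunny.
# Data structures are illustrative, not the tool's actual wire format.

NETWORK_NAME = "/yoursunny"  # only prefixes under this namespace are exported

def select_exportable(name_lsas):
    """Return prefixes from NLSR Name LSAs eligible for export to the testbed."""
    exportable = []
    for lsa in name_lsas:
        for prefix in lsa["names"]:
            # "grep": exact match, or any route under the /yoursunny namespace
            if prefix == NETWORK_NAME or prefix.startswith(NETWORK_NAME + "/"):
                exportable.append(prefix)
    return exportable

lsas = [
    {"router": "/yoursunny/router-a", "names": ["/yoursunny/pushups", "/yoursunny"]},
    {"router": "/yoursunny/router-b", "names": ["/ndn/multicast/app"]},  # filtered out
]
print(select_exportable(lsas))  # only the /yoursunny prefixes survive
```

Note that the match is on name component boundaries: a prefix like /yoursunnyside would not be exported.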

Unidirectional Export is Un-Sync-able

While I was able to successfully announce my own services to the world, the ndn6-register-prefix-remote tool is a one-way hack. It functions only because it is strictly a unidirectional export from ndn6 to testbed and is restricted to only the /yoursunny namespace (including routes starting with this prefix, such as /yoursunny/pushups). As we will see in the next section, trying to make it bidirectional or loosen the prefix filter would cause a network meltdown.

State Vector Sync (SVS), which is heavily used in the Ownly application, would not work properly with the current unidirectional export setup. The SVS protocol distinguishes between two types of Interests:

  • A Sync Interest is sent by a node whenever it generates a publication. It is multicast to every node in the group. It serves as a notification and is never answered.
  • A Data Interest is used for retrieving a publication from a specific node. It is unicast toward the node that published the data. It may be answered by the producer node itself, by an in-network cache, or by a repository.

Ownly, according to a live packet capture and the NLSR status page, is using these names and prefixes:

  • Sync Interest:
    • example name: /ndn/multicast/ndn/ownly.named-data.net/weekly-calls/root/32=svs/v=3/params-sha256=7c81c7bde1af65c7952937deebb15e1f8bc72bd7b7402a85e42db31a69e33230
    • prefix registration: /ndn/multicast/ndn/ownly.named-data.net/weekly-calls/root/32=svs
    • forwarding strategy: multicast
  • Data Interest:
    • example name: /ndn/ownly.named-data.net/weekly-calls/root/ndn/edu/ucla/cs/adam/t=1775505323/seq=10/v=0/seg=0
    • prefix registration: /ndn/ownly.named-data.net/weekly-calls/root/ndn/edu/ucla/cs/adam/t=1775505323
    • forwarding strategy: best-route unicast

Notably, both prefixes start with the /ndn component that generally signifies "the global NDN testbed". This component is a hard-coded requirement in Ownly's trust schema.

If an Ownly client connects to a router in the ndn6 network, assuming a relaxed prefix registration policy, the router would accept these two prefix registrations and propagate the routes within the ndn6 network. However, since neither prefix starts with /yoursunny, the ndn6-register-prefix-remote tool would not export these prefixes into the global NDN testbed. This creates a split-brain situation.

Imagine three friends trying to collaborate on the same workspace. Through the randomness of the NDN-FCH service, Alice is connected to the global NDN testbed, while Bob and Carol are both connected to the ndn6 network.

(diagram: Alice attaches to the global NDN testbed; Bob and Carol attach to the ndn6 network)

Because of how Longest Prefix Match (LPM) works in NDN forwarding tables, the SVS protocol instantly fractures:

  1. Within the ndn6 network, both Bob and Carol successfully register the Sync Prefix /ndn/multicast/…/32=svs.
  2. When Bob edits the document, his node sends a Sync Interest into the ndn6 network to notify others.
  3. To bridge the networks, I have a short /ndn static route configured on ndn6 routers pointing toward the testbed. However, when NFD processes the Sync Interest from Bob, the LPM lookup in the Forwarding Information Base (FIB) would first match the /ndn/multicast/…/32=svs prefix registration from Carol. Hence, the Sync Interest is forwarded to only Carol, while the /ndn static route is ignored.
  4. Consequently, the Sync Interest successfully reaches Carol but never crosses the boundary toward the testbed.
  5. Alice stares at a frozen screen, completely unaware of Bob's edits.

SVS breaks down across network boundaries because a short static inter-domain route always loses to a longer, more specific application route registered locally.
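The FIB lookup in step 3 can be reproduced with a toy longest-prefix-match (a sketch only; NFD matches on name components rather than strings, but the outcome is the same here):

```python
# Minimal FIB longest-prefix-match sketch showing why Bob's Sync Interest
# never hits the /ndn static route. Illustrative, not NFD's actual code.

def lpm(fib, name):
    """Return the nexthops of the longest registered prefix matching `name`."""
    best, best_len = None, -1
    for prefix, nexthops in fib.items():
        if (name == prefix or name.startswith(prefix + "/")) and len(prefix) > best_len:
            best, best_len = nexthops, len(prefix)
    return best

fib = {
    "/ndn": ["toward-testbed"],  # short static route bridging the networks
    "/ndn/multicast/ndn/ownly.named-data.net/weekly-calls/root/32=svs": ["toward-Carol"],
}
sync_interest = "/ndn/multicast/ndn/ownly.named-data.net/weekly-calls/root/32=svs/v=3"
print(lpm(fib, sync_interest))  # ['toward-Carol']: the /ndn route is never consulted
```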

Valley-Free Policy vs Bidirectional Import+Export

A casual observer may propose: remove your strict /yoursunny filter and switch to bidirectional import+export. With both networks explicitly aware of the whereabouts of the /ndn/multicast/…/32=svs Sync Prefix, LPM would no longer isolate the domains.

However, doing so in a multi-domain network deployment violates the Valley-Free Routing policy, a foundational structural rule of global network engineering. In the global Internet, autonomous systems maintain commercial and structural relationships categorized as Providers, Peers, and Customers. The valley-free rule dictates that a network can provide transit for another network only if it is structurally sound to do so:

  • A network may forward traffic between a Customer and a Peer/Provider.
  • A network must not forward traffic from one Provider to another Provider, from one Peer to another Peer, or between a Peer and a Provider. Doing so is called a valley-free violation and typically results in a BGP route leak that impacts the stability of the Internet.

If I start doing bidirectional import+export between the ndn6 network and the global NDN testbed, it would cause my network to inadvertently provide transit for the global NDN testbed. In the example topology below:

  1. A producer is connected to testbed router A and registers a prefix there.
  2. The prefix is imported into the ndn6 network by router C and propagated through ndn6's NLSR routing daemons.
  3. Because NLSR operates entirely on flat Link State Advertisements (LSAs), ndn6 router D has no way to know whether the prefix originated from the ndn6 network (i.e. it is a Customer route) or imported from the testbed (i.e. it is a Provider/Peer route). It would export this prefix back to the testbed.
  4. The testbed router B, seeing router D advertising a lower cost, would prefer this path over the internal B-A path and forward Interests toward router D in the ndn6 network, even if the producer is located inside the testbed. This is the textbook valley-free violation.
```yaml
# machine readable topology as depicted in the SVG
routers:
  A: { network: testbed, end_hosts: [producer] }
  B: { network: testbed, end_hosts: [consumer] }
  C: { network: ndn6, end_hosts: [] }
  D: { network: ndn6, end_hosts: [] }
edges:
  A-B: { cost: 150 }
  A-C: { cost: 100 }
  B-D: { cost: 10 }
  C-D: { cost: 10 }
paths:
  A-B: { label: "valley-free route" }
  A-C-D-B: { label: "shortest route, valley-free violation" }
```
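Running a shortest-path computation over the topology above confirms the cost comparison in step 4 (a throwaway Dijkstra sketch, not NLSR's actual path calculation):

```python
import heapq

# With bidirectional import+export, router B sees cost 120 via the ndn6
# network (B-D-C-A) versus 150 on the internal A-B link, so the
# valley-free-violating path wins on cost alone.

edges = {("A", "B"): 150, ("A", "C"): 100, ("B", "D"): 10, ("C", "D"): 10}
graph = {}
for (u, v), cost in edges.items():
    graph.setdefault(u, []).append((v, cost))
    graph.setdefault(v, []).append((u, cost))

def shortest_path(src, dst):
    pq, seen = [(0, src, [src])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph[node]:
            if nxt not in seen:
                heapq.heappush(pq, (cost + w, nxt, path + [nxt]))

print(shortest_path("B", "A"))  # (120, ['B', 'D', 'C', 'A'])
```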

Policy Tagging Needed in Routing Protocols

This operational difficulty exposes a significant gap in today's NDN routing protocols. The valley-free policy is impossible to enforce in NDN, because routing protocols like NLSR and ndn-dv are completely policy-blind. NLSR's NameLSA object carries the name prefix, sequence number, expiration time, together with a cryptographic signature of the router that published it. However, it features zero fields for path attributes, transit marking, or community tagging. Without these policy "knobs", an independent network operator such as myself cannot safely peer with another network. I had to maintain the strict /yoursunny filter and unidirectional export-only readvertisement, because this is the only way to prevent routing loops and transit leaks.

In my IPv6 network, AS200690, I handle this exact problem seamlessly using BGP Large Communities. Through a configuration generated by Pathvector for the BIRD Internet Routing Daemon, each IPv6 route entering my network is tagged with a Large Community value that indicates whether it is a Provider, Peer, or Customer route. These tags are embedded in the BGP routing updates and propagated together with the IPv6 prefix and routing path information. I can then write an export policy that only accepts routes tagged "Customer" and rejects Provider/Peer routes, upholding the valley-free policy.
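In BIRD filter syntax, the tagging and the export policy look roughly like the following (the community values here are illustrative, not my actual Pathvector-generated configuration):

```
# Tag routes as they are learned from a provider session (illustrative values)
filter import_from_provider {
  bgp_large_community.add((200690, 1, 102));  # 102 = learned from provider
  accept;
}

# Export to a peer or provider: only routes tagged as learned from a customer
filter export_to_peer {
  if bgp_large_community ~ [(200690, 1, 100)] then accept;  # 100 = customer route
  reject;
}
```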

In order for NDN applications to work seamlessly across autonomous domain boundaries:

  • The applications should stop using "skeleton keys" to perform prefix registrations, as discussed in the previous episode.
  • The routing protocols should include a name-based policy tagging mechanism similar to BGP Large Communities, so that a bridging tool such as ndn6-register-prefix-remote could read these tags and decide whether to readvertise the prefix.
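To make the second point concrete, here is a sketch of how a bridging tool could use such tags, assuming a hypothetical "policy" field on Name LSAs (neither NLSR nor ndn-dv carries anything like this today):

```python
# Hypothetical policy-tagged Name LSAs: a bridging tool reads the tag and
# applies the valley-free export rule, instead of grepping for a name prefix.

CUSTOMER, PEER, PROVIDER = "customer", "peer", "provider"

def readvertisable(name_lsas):
    """Valley-free export: only customer-originated prefixes go to peers/providers."""
    return [
        prefix
        for lsa in name_lsas
        for prefix in lsa["names"]
        if lsa.get("policy") == CUSTOMER
    ]

lsas = [
    {"names": ["/yoursunny/pushups"], "policy": CUSTOMER},  # safe to export
    {"names": ["/ndn/some-app"], "policy": PROVIDER},       # must not be re-exported
]
print(readvertisable(lsas))  # only the customer-originated prefix
```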

These improvements would enable proper network federation. Ownly and other NDN applications would be able to, once again, utilize nodes from both the testbed and the ndn6 network.