NDN Video Streaming on the ndn6 Network

The ndn6 network, my own global-scale Named Data Networking network, came back earlier this year. I moved my NDNts video streaming app into the ndn6 network to reduce dependency on the NDN testbed. How well is it performing?

QUIC ⇒ HTTP/3

In my last article "NDN video streaming over QUIC", I used the Chrome browser's experimental QuicTransport feature to perform video streaming over Named Data Networking. The analysis revealed that QUIC transport generally performed better than WebSockets in this application, according to metrics including video resolution and startup latency.

Web technologies are constantly evolving. QuicTransport was in Origin Trial status at the time, but it was discontinued as of Chrome 91, and WebTransport was introduced in its place. The main difference is that WebTransport uses HTTP/3 as the underlying network protocol, while QuicTransport used QUIC directly.

Since HTTP/3 runs over QUIC, I expected no performance difference between the two. I promptly registered for the WebTransport Origin Trial, and updated my gateways and NDNts libraries to use the new API.
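
For reference, here is a minimal sketch of how the browser side opens a WebTransport session and exchanges NDN packets as datagrams. The gateway URL is hypothetical and the TLV bytes are placeholders; this illustrates the API shape, not the actual NDNts transport code.

    // Minimal sketch: NDN over WebTransport datagrams in the browser.
    // The URL is a hypothetical gateway endpoint; TLV bytes are placeholders.
    const url = "https://ndn-gateway.example.net/ndn";
    const transport = new WebTransport(url);
    await transport.ready;

    // One TLV-encoded NDN packet per datagram, in each direction.
    const writer = transport.datagrams.writable.getWriter();
    const reader = transport.datagrams.readable.getReader();

    const interestTlv = new Uint8Array([]); // placeholder: an encoded Interest goes here
    await writer.write(interestTlv);
    const { value: dataTlv } = await reader.read(); // an encoded Data packet, if one arrives

On the gateway side, each datagram is still relayed to and from the local NFD, the same as before.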

Streaming push-ups on the ndn6 Network

A bigger change is moving the video producers to the ndn6 network. The ndn6 network, as of this writing, has 9 routers in 3 continents:

  • North America
    • LAX Los Angeles, California: VirMach (IPv6 via TunnelBroker; no public IPv6)
    • DAL Dallas, Texas: Nexril
    • MIA Miami, Florida: Hostodo
    • BUF Buffalo, New York: VirMach (IPv6 via SIT tunnels to neighbors; no public IPv6)
  • Europe
    • LIL Roubaix, France: Evolution Host
    • MUC Munich, Germany: Webhosting24 (serving WebSockets over IPv6 only; no HTTP/3)
    • WAW Warsaw, Poland: WebHorizon
  • Asia
    • SIN Singapore: Green Cloud VPS
    • NRT Tokyo, Japan: Oracle Cloud (Internet bandwidth is limited to 50Mbps; serving both WebSockets and HTTP/3)

Every router has links to 2~4 other nodes, all operating over IPv6. They are connected in a topology like this:

[topology diagram: LAX, DAL, MIA, BUF, LIL, MUC, WAW, SIN, NRT]

NLSR routing software is installed on all routers, operating in link state mode. The link cost is set to the RTT in milliseconds as measured with the ping -6 command. The ASF forwarding strategy, as recommended by the NLSR developers, is used for content prefixes.

The video producer of the NDN push-ups site has two replicas, deployed at DAL and WAW. My other site, the NDNts video demo with educational content, has one producer deployed at LAX. Each producer advertises its precise prefix(es) within the ndn6 network. However, these prefixes are not readvertised into the global NDN testbed; instead, 6 of my nodes announce the same prefix /yoursunny into the testbed.
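
For completeness, here is a minimal sketch of what a producer replica does: connect to the local NFD and answer Interests under its prefix. The prefix and payload below are placeholders (the real repository serves segmented video), so treat this as a simplification rather than the actual producer code.

    // Minimal NDNts producer sketch; prefix and payload are placeholders.
    import { openUplinks } from "@ndn/cli-common"; // connect to the local NFD and enable prefix registration
    import { Endpoint } from "@ndn/endpoint";
    import { Data } from "@ndn/packet";

    await openUplinks();
    const endpoint = new Endpoint();
    endpoint.produce("/yoursunny/video/example", async (interest) => {
      const payload = new TextEncoder().encode("segment bytes would go here");
      return new Data(interest.name, Data.FreshnessPeriod(60_000), payload);
    });

The prefix registration toward the local NFD comes from the uplink; the decision of what gets readvertised into the global testbed is made by the routers, not by the application.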

Gateway selection is greatly improved. During the previous experiment, one of three NDN-QUIC gateways was statically assigned according to the visitor's country code. This time, I rewrote NDN-FCH during an NDN hackathon, so that there is distance-based selection for both WebSockets and HTTP/3. Moreover, the new NDN-FCH service periodically checks the availability of each router over each transport protocol.

The video player web application asks NDN-FCH for up to 4 of the nearest NDN routers under each transport protocol that passed recent health checks; these routers could come from either the global NDN testbed or the ndn6 network. The NDNts library then measures the RTT to these routers and selects the fastest one; this RTT test uses a generic prefix (/localhop/nfd/rib/list, which is answered by every NFD) and is not specific to the video content.
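
The selection step itself is simple. Below is a minimal sketch of the idea (not the actual NDNts code): measureRtt stands in for the probe that times one /localhop/nfd/rib/list Interest on a candidate connection, and the candidate list would be the routers returned by NDN-FCH.

    // Sketch: pick the candidate router with the lowest probed RTT.
    // measureRtt is a stand-in for the NDNts probe described above;
    // unreachable routers are treated as infinitely slow.
    async function pickFastest<R>(
      candidates: readonly R[],
      measureRtt: (router: R) => Promise<number>,
    ): Promise<R | undefined> {
      const probed = await Promise.all(
        candidates.map(async (router) => ({
          router,
          rtt: await measureRtt(router).catch(() => Number.POSITIVE_INFINITY),
        })),
      );
      probed.sort((a, b) => a.rtt - b.rtt);
      return probed[0]?.router;
    }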

Viewer Locations and Counts

I collected statistics between 2021-07-01 and 2021-08-15. During this period, my beacon server received 38937 log entries, representing 197 sessions and 233 minutes of video playback. The number of video playback sessions from each continent is presented in the table below:

user continent       HTTP/3 sessions   WebSocket sessions   failed sessions
Africa (AF)                 0                   2                  0
Antarctica (AN)             0                   0                  0
Asia (AS)                  13                  59                  7
Europe (EU)                21                  39                  0
North America (NA)         16                  35                  1
Oceania (OC)                3                   2                  0
South America (SA)          0                   0                  0

Notably, the proportion of Asian viewers has increased compared to the previous statistics. It was a good decision to have two routers in Asia.

Video Experience: Not Good

The following chart shows the video resolution experienced by viewers in each continent, counted separately for WebSockets and HTTP/3.

The next chart shows the startup latency of each video, i.e. the duration between the viewer clicking the play button and the video starting to play.

In Europe and Asia, HTTP/3 is performing worse than WebSockets for both metrics. In North America, the benefit of HTTP/3 is dwindling: startup latency is lower, but video resolutions are more or less the same.

Crummy Network, or?

I wonder: why did video resolution become worse when I moved the producers into the ndn6 network? I'd like to look at playback sessions connected over WebSockets and HTTP/3 separately, in comparison with the previous experiment.

The first chart shows playback sessions over WebSockets.

  • UDP refers to the previous setup:

    • Video producers are directly attached to the global NDN testbed over UDP.
    • Viewers connect to the testbed over WebSockets.
  • ndn6 refers to the current setup:

    • Video producers are running inside the ndn6 network, connecting to local NFD over Unix socket.
    • Since the NDN-FCH response can include routers from both networks and the majority of WebSocket routers are on the global NDN testbed, viewers still connect to the testbed.

The second chart shows playback sessions over either QUIC or HTTP/3.

  • QUIC refers to the previous setup:

    • Browsers connect to the gateway using QuicTransport API.
    • NDN-QUIC gateway connects to local NFD.
    • NFD can cache up to 98304 packets, mostly dedicated to video content.
    • NFD connects to one nearby testbed router and forwards all Interests there.
  • HTTP/3 refers to the current setup:

    • Browsers connect to the gateway using WebTransport API.
    • NDN-QUIC gateway connects to local NFD.
    • NFD can cache up to 48000 packets (except LIL having a capacity of 90000 packets), shared among several applications.
    • NFD forwards Interests within the ndn6 network, since the producers are inside my network.

From what I can see, video resolution remains the same when retrieved over WebSockets, but ndn6 + HTTP/3 is significantly worse than NDN testbed + QUIC.

Do I Have Enough Caching?

If we can trust that the QuicTransport vs WebTransport difference does not change performance, a strong candidate for the cause of this behavior is the reduced caching capacity. Comparing the current HTTP/3 setup to the previous QUIC setup:

  • Local NFD caching capacity is roughly halved (a back-of-envelope estimate of what this means follows this list).
  • Packets created by NLSR and the Docker registry compete for the limited cache space.
  • Many routers in the global NDN testbed have larger caches, but Interests would not be forwarded there.
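
For a rough sense of scale, here is a back-of-envelope estimate of how much video the Content Store can hold. The segment size and bitrate below are assumptions for illustration, not measurements, and the result is an upper bound because the cache is shared with other applications.

    // Back-of-envelope: seconds of video that fit in the Content Store.
    // Both constants are assumptions, not measured values.
    const csCapacityPackets = 48_000; // current shared CS capacity
    const bytesPerSegment = 7_500;    // assumed Data payload size, near the NDN packet size limit
    const videoBitrate = 2_000_000;   // assumed ~2 Mbps rendition

    const cachedSeconds = (csCapacityPackets * bytesPerSegment * 8) / videoBitrate;
    console.log(cachedSeconds);       // 1440 seconds under these assumptions

With the previous 98304-packet dedicated cache, the same arithmetic gives roughly twice that window.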

To confirm this hypothesis, let's compare video delivery from each ndn6 router:

  • DAL and WAW each host a replica of the push-ups video repository; LAX hosts the educational video repository. (Note: the NDN-DPDK ICN2020 video is stored in the push-ups video repository, and is counted as such.) Playback sessions connected to these routers should perform better for locally hosted videos.
  • LIL has higher caching capacity than other routers. Playback sessions connected to LIL should perform better than others.

The above chart shows video resolutions served from ndn6 routers. A router is included only if it accrued at least 5 minutes of playback. The results are mixed:

  • DAL, which hosts a push-ups video repository, indeed delivered high quality videos.
  • LIL, which has a larger cache, also delivered high quality videos.
  • Not having enough playback minutes, WAW is excluded from this comparison, although it likely benefited LIL, which is only 28 ms away.
  • LAX, which hosts the educational video repository, is doling out 240p potato quality videos most of the time. It's a VirMach server with older and slower hardware, but that alone should not have such a large effect. Moreover, the same server served North America over QUIC during the previous experiment, and it performed very well.
  • Curiously, NRT was used over WebSockets quite a lot, but nobody connected to it over HTTP/3.

There isn't conclusive evidence that reduced caching is negatively impacting video playback quality, but that is still my primary suspect. Most of my (incredibly cheap) servers have only 1GB RAM, which doesn't leave a lot of room for caching, but I'll see what I can do.

Conclusion

This article describes my recent NDN video streaming experiments on the "NDN push-ups" website, especially after I moved the producers into my very own ndn6 network. Using real-world data collected during July and August 2021, I analyzed quality of experience metrics such as video resolution and startup latency, and found that performance has worsened compared to the previous deployment on the global NDN testbed. I guessed that the reduced caching capacity on my routers could be the reason for the lower video resolution, but was unable to find conclusive evidence. I will, of course, keep digging and keep improving my app, and will report back in a few months when I find something.

Although this is not a scientific publication, raw data and scripts in this article are available as a GitHub Gist. If you find this article interesting, please do a few push-ups in my honor, cheers!