Writing and publishing academic papers is an essential part of PhD education. During my 6-year PhD career, I published three academic papers as first author in peer-reviewed conferences:
- Junxiao Shi, Beichuan Zhang, Making Inter-domain Routing Power-aware?, ICNC 2014
- Junxiao Shi, Teng Liang, Hao Wu, Bin Liu, Beichuan Zhang, NDN-NIC: Name-based Filtering on Network Interface Card, ICN 2016
- Junxiao Shi, Eric Newberry, Beichuan Zhang, On Broadcast-based Self-Learning in Named Data Networking, IFIP Networking 2017
Publishing an academic paper is hard. In the process, I must:
- Come up with an idea.
- Confirm the idea is feasible.
- Design and execute experiments to show the design is superior to competitors.
- Write the paper to make others understand my idea and experiments.
- Submit the paper, and hope my paper is better than most submissions in the same conference.
The Idea Phase
A great idea starts from a well-defined problem. I found the problems I would tackle at the forefront of current research, by reading papers published by others and participating in discussions on mailing lists.
The first research topic I worked on, energy management for network infrastructure, is a rich field with a long history. I spent an entire semester reading existing publications. My PhD advisor, Dr Beichuan Zhang, taught me to be critical when reading: "Assume they contain errors. Try to find them." I did indeed find shortcomings in the presented designs, and saw how later papers amended those shortcomings.
My second research topic, Named Data Networking (NDN), is a newer field. Papers are relatively easy to find, because there are only a handful of conferences specialized in this topic. Nevertheless, I feel that I did not read enough, as I missed many papers published in lesser-known places.
When I was able to find a "crack" in existing publications, it could become a research problem to tackle. Then I just needed to come up with an idea to solve the problem. As an example, the NDN-NIC paper started with the problem "every broadcast NDN packet is processed in a software network stack which incurs CPU overhead", and the idea was "filter the packets in the hardware network interface card (NIC)" to reduce the overhead.
Finding the initial idea is simple, but confirming its feasibility requires careful investigation. Many ideas sound nice, but are simply infeasible. As an analogy, you could have the idea of reaching the moon with an elevator, but physics would not allow you to build such an elevator. Likewise, building an NDN-NIC whose packet filtering logic is 100% accurate would require a large amount of memory in the NIC, which is prohibitively expensive; thus it was not a feasible idea.
I revised my idea to filter the packets with Bloom filters, so that less memory would be required in the NIC, at the expense of decreased accuracy. When I presented this idea, my advisor was not convinced: he wanted numbers showing the extent to which accuracy would decrease; if my filter were only 50% accurate, it would not be a convincing design. Fortunately, number crunching indicated that my idea would probably work, and I got the green light to start experimenting.
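The kind of number crunching involved can be illustrated with the standard Bloom filter false-positive formula, (1 − e^(−kn/m))^k for m bits, n inserted entries, and k hash functions. This is a minimal sketch, not the actual analysis from the NDN-NIC paper, and the filter size and entry count below are made-up example numbers:

```python
import math

def bloom_fp_rate(m_bits: int, n_entries: int, k_hashes: int) -> float:
    """Estimate the false positive probability of a Bloom filter
    with m_bits bits, n_entries inserted entries, and k_hashes
    hash functions."""
    return (1.0 - math.exp(-k_hashes * n_entries / m_bits)) ** k_hashes

# Hypothetical setting: a 4 KB on-NIC filter holding 1000 names.
rate = bloom_fp_rate(m_bits=4096 * 8, n_entries=1000, k_hashes=7)
```

A quick calculation like this can show whether the accuracy loss stays in an acceptable range before any hardware or simulator work begins.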
Being a strong programmer, I had little trouble doing experiments. Starting from the idea, I could define the network protocol, design the processing procedures, implement them in a program, and the experiments would come to life. However, this was easier said than done.
There is a trade-off between timeliness and program quality. My advisor complained that I was spending too much time on programming, and no "numbers" were coming out for weeks. It is desirable to get some experiment results quickly, so that flaws in the design can be spotted earlier. To get results faster, I sometimes took shortcuts in software architecture and implementation.
However, using "dirty hacks" lowers the quality of the program and makes it hard to maintain. While long-term maintenance is unimportant for experimentation code, having the flexibility for design changes is essential. On multiple occasions, I found it difficult to make certain changes to the design, because the program architecture was not compatible with those changes. For example, NDN-NIC was initially designed to use only one Bloom filter to filter all packets. After seeing the bad results, I changed the design to use three Bloom filters for different packet types. I had to redefine the file formats and perform a major refactoring of the simulator.
Sometimes it could get even worse: the results were weird, and I did not know why. In one version of the broadcast-based NDN self-learning simulator, the algorithm was incredibly complicated and had a nondeterministic random factor, and the program was full of dirty hacks. The experiment results were mixed: it worked well in some scenarios and performed badly in others. I spent weeks reading the logs and attempting to fix the program, but the outcome stayed the same. After wasting a whole semester, I abandoned that algorithm and its simulator. To this day, I still do not know whether the algorithm design was wrong, or whether it was just an error in the implementation.
If I were to do it again, I would prioritize program quality: a good architecture, minimal use of hacks, and unit tests for the most important logic. Otherwise, the shortcuts I took would eventually come back to bite me.
A computer network is a distributed system in which each node operates independently and communicates with other nodes through packets. Therefore, I started most implementations with a program that represents a network node. To run an experiment, I would fire up multiple virtual machines connected in a certain topology, run an instance of the program in each virtual machine, and observe how the network behaved.
It turned out that this was not the best way. In every paper I wrote, some metrics needed to be collected. For example, the self-learning paper reports the number of packet transmissions needed to complete a file retrieval. Having only a "network node" program would not provide this metric. I had to parse the logs to derive it; it would have been better if the program had counters that provided the metrics directly.
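Such a counter can be as simple as a shared tally object that the node program increments inline wherever the event of interest happens. This is a hypothetical sketch (the `NodeStats` class and the counter names are invented for illustration), not code from my simulators:

```python
from collections import Counter

class NodeStats:
    """Per-node counters, incremented inline as events happen,
    instead of being reconstructed from logs afterwards."""
    def __init__(self):
        self.counters = Counter()

    def incr(self, name: str, amount: int = 1):
        self.counters[name] += amount

stats = NodeStats()
stats.incr("pkt-tx")          # call wherever a packet is transmitted
stats.incr("pkt-tx")
stats.incr("retrieval-done")  # call when a file retrieval completes
# At the end of a run, dump stats.counters as the experiment result.
```

Dumping the counters at the end of a run yields the metric directly, with no log parsing step.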
Moreover, an algorithm design usually contains some adjustable parameters, and the experiments should show the effect of adjusting each parameter. I used to include those parameters as compile-time constants in the code. To experiment with a certain parameter setting, I would have to modify the code, recompile the program, run the experiment, and parse the logs to get one data point. With two parameters of seven values each, I would repeat this process 49 times to generate all the data points. Whenever I changed the design or implementation, the same tedious process started all over again.
To make parameterized experiments more efficient, I made an "experiment controller". First, I coded the program to read all parameters from command line arguments or environment variables. Second, I tabulated all combinations of parameter settings I wanted to test into a file. Third, I scripted the log parsing steps, so that the data point would show up right after an experiment. Fourth, I wrote a "controller" script to read the parameter settings from the table file, pass the parameters to the main program, and invoke the log parsing script. Furthermore, this controller script would avoid redoing an experiment if results from the same parameters were already available; it could also parallelize the experiments, if permitted by the main program.
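The controller's core loop can be sketched roughly as follows. This is a minimal, hypothetical reconstruction, not my actual script: the CSV table format, the `./main-program` path, and the result file naming are assumptions, and a real controller would also invoke the log parsing script after each run.

```python
import csv
import os
import subprocess

def run_experiments(table_file, results_dir, program="./main-program"):
    """Read parameter settings from a CSV table, skip settings whose
    results already exist, and run the main program with each setting
    passed as command line arguments."""
    with open(table_file) as f:
        for row in csv.DictReader(f):
            run_id = "-".join(f"{k}={v}" for k, v in sorted(row.items()))
            out_path = os.path.join(results_dir, run_id + ".txt")
            if os.path.exists(out_path):
                continue  # result already available; do not redo
            args = [program] + [f"--{k}={v}" for k, v in row.items()]
            with open(out_path, "w") as out:
                subprocess.run(args, stdout=out, check=True)
            # the log parsing script could be invoked here, so the
            # data point shows up right after the experiment finishes
```

Because completed runs are skipped, the same table can be re-run safely after an interruption, and parallelizing is a matter of partitioning the table.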
My productivity increased significantly once I started running experiments through the controller. It allowed me to prepare a large parameter table and have the computer run the experiments one after another while I was sleeping or riding a bike.
The experiment controller also helped tremendously in improving the reproducibility of my experiments. In the old way, the code had to be slightly modified for each parameter setting, and the modifications were not committed into source control. In the new way, the main program, the controller script, and the parameter table are all committed into source control, so that an experiment can be replicated with exactly the same settings.
Do Each Experiment More Than Once
One major mistake I made in all my papers was doing each experiment only once. A design should be tested with multiple inputs, not just one. Moreover, in computer networking, packet timing can differ between runs due to random factors.
Dr Hartman discovered this mistake when I later compiled my research into a dissertation. Following his advice, I repeated all the experiments and found that several conclusions were questionable: the observations were merely coincidences in my initial experiments, which I had mistakenly taken as conclusions.
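Repeating an experiment is cheap to automate: run the same configuration under several random seeds and report a mean and a spread rather than a single number. The sketch below is illustrative only; `run_trial` stands in for a real simulator invocation, and the noisy metric it returns is fabricated:

```python
import random
import statistics

def run_trial(seed):
    """Stand-in for one experiment run; a real version would invoke
    the simulator with the given random seed."""
    rng = random.Random(seed)
    return 100.0 + rng.gauss(0, 5)  # hypothetical noisy metric

def summarize(n_trials=10):
    """Run the experiment several times and report mean and standard
    deviation, instead of trusting a single run."""
    values = [run_trial(seed) for seed in range(n_trials)]
    return statistics.mean(values), statistics.stdev(values)

mean, stdev = summarize(10)
```

A conclusion that only holds for one seed is a coincidence; reporting the spread across seeds makes such coincidences visible.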
When the experiments were done and the results looked good, the next step was to write the paper. As a non-native English speaker, writing has been my weakness.
A quick read of THE ELEMENTS OF STYLE allowed me to grasp the mechanics of English writing, but this is just a start. More importantly, effective writing is readable: it must be clear, accurate, and concise.
My scientific writing was "mechanical": it often read like a straight translation of the program, and was difficult to understand. My logic was discontinuous: I often assumed the reader could easily deduce certain inferences, and did not adequately describe the reasoning. Due to these two common mistakes, my submissions were often rejected because the anonymous reviewers misunderstood my concepts.
While self-revision remained important, having classmates review my drafts helped a lot. They could point out what they could not understand, which alerted me to potential writing problems.
Submission and Scheduling
In the computer networking field, academic papers are typically published in conferences rather than journals. There are top conferences such as SIGCOMM, INFOCOM, and NSDI, which are very difficult to get into; there are also lower-tier conferences with higher acceptance ratios. Each conference has a submission deadline, and a paper is not considered if the deadline is missed.
Once a conference was selected, all work had to be scheduled to meet the deadline. Since writing is my weakness, I would allocate at least four weeks for writing; in other words, most experiment results needed to be ready four weeks before the submission deadline. From there, a week-by-week plan could be made for the experiments.
An on-time submission depends not only on the timely completion of the experiments, but also on the experiments producing meaningful, good results. This is the biggest risk: if the results are bad, I have to go back to modify the design, modify the program, and redo the experiments; it is a loop. An advisor at the Writing Skills Improvement Program suggested that I include this loop in my schedule, so the schedule would tell me how many times I could repeat the loop without missing the deadline. However, there is no way to guarantee the number of iterations: if the results continued to be bad, I would end up missing the submission deadline. I guess this is just part of the academic paper process.
If everything went well, I would submit the paper. Over the next few months, anonymous reviewers would read my submission. If my work was great and I was lucky, the paper would be accepted. Otherwise:
we regret to inform you that your paper was not accepted for inclusion in the conference program.
I would cry a bit when seeing this.
Travel and Presentation
One of the perks of having a paper accepted is traveling to the conference location and presenting the paper to the world. However, as an international student, I needed a visa to travel to most countries and to return to the United States, which limited where I could go. I was able to attend only one conference, ICNC 2014, held in Hawaii.
My regular taxi driver did not show up, but I got to the airport safely on time. Hawaii is a paradise: beach, sunshine, blonde girls, everything.
My only obligation was a 20-minute presentation of my paper. There were four people in the audience, each of whom needed to present their own paper. Nobody paid attention to my slides or my presentation. The session chair felt obligated to ask me one question, which wasn't even relevant.
Other than this presentation, I attended a few sessions of the conference to hear some presentations whose titles interested me. The rest of the time was spent swimming in the ocean and seeking geocaches.
Academically, this was not a rewarding experience. Non-academically, who doesn't want a free trip?
This is my personal experience with planning, writing, publishing, and presenting academic papers in computer networking conferences. I may not be the best person at writing papers, but I hope my lessons in doing experiments will be helpful to others.