Major U.S. airports are critical nodes in the air transportation network, providing the interface between ground and air transportation. Airports are geographic monopolies with multiple stakeholders, and government regulations require them to operate as public utilities under profit-neutral financial conditions. By their nature, these stakeholders have different and sometimes conflicting performance objectives.
Because U.S. airports operate under profit-neutral regulations, enterprise performance cannot be measured using traditional financial objectives and must instead be evaluated by how well the airports meet the objectives of all of their stakeholders. Comparative benchmarking is used to evaluate the relative performance of airports.
An analysis of past benchmarks of airport performance described in this dissertation shows that these benchmarks are ambiguous about which stakeholders’ needs they address and provide limited motivation for the particular performance metrics used. Furthermore, benchmarks of airport performance use multi-dimensional data, and benchmarking with such data in the absence of known utility functions requires multi-objective comparison models such as Data Envelopment Analysis (DEA). Published benchmarks have used different DEA model variations with limited explanation of why those models were selected. Both the choice of performance metrics and the choice of DEA model affect the benchmark results; the limited motivation for metrics and model renders the published benchmark results inconclusive.
This dissertation describes a systematic method for airport benchmarking to address the issues described above. The method can be decomposed into three phases. The first phase is the benchmark design, in which the stakeholder goals and DEA model are selected. The selection of stakeholder goals is enabled by a model of airport stakeholders, their relationships, and their performance objectives for the airport. The DEA model is selected using a framework and heuristics for systematically making DEA model choices in an airport benchmark.
The second phase is the implementation of the benchmark, in which the benchmark data are collected and benchmark scores are computed using the DEA model implementations provided in the dissertation. In the third phase, the results are analyzed to identify factors that contribute to strong or poor performance and to provide recommendations to decision- and policy-makers.
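To make the score computation concrete, the following is a minimal sketch of one common DEA formulation, the input-oriented CCR envelopment model, solved as a linear program. This is not the dissertation’s implementation: the use of SciPy’s `linprog`, the function name `ccr_efficiency`, and the toy airport data (one input, two outputs) are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(inputs, outputs, unit):
    """Input-oriented CCR efficiency score in (0, 1] for decision-making
    unit `unit`, under constant returns to scale.

    inputs:  (n_units, n_inputs) array, e.g. runway capacity
    outputs: (n_units, n_outputs) array, e.g. passengers, movements
    """
    n, m = inputs.shape
    _, s = outputs.shape
    # Decision variables: [theta, lambda_1, ..., lambda_n]
    c = np.zeros(n + 1)
    c[0] = 1.0  # minimize theta (radial input contraction factor)
    A_ub, b_ub = [], []
    # Input constraints: sum_j lambda_j * x_ij <= theta * x_i,unit
    for i in range(m):
        A_ub.append(np.concatenate(([-inputs[unit, i]], inputs[:, i])))
        b_ub.append(0.0)
    # Output constraints: sum_j lambda_j * y_rj >= y_r,unit
    for r in range(s):
        A_ub.append(np.concatenate(([0.0], -outputs[:, r])))
        b_ub.append(-outputs[unit, r])
    bounds = [(0, None)] * (n + 1)  # theta and all lambdas non-negative
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=bounds, method="highs")
    return res.x[0]

# Toy data for three hypothetical airports:
# one input (runway capacity), two outputs (passengers, movements)
x = np.array([[100.0], [120.0], [80.0]])
y = np.array([[50.0, 40.0], [55.0, 42.0], [48.0, 39.0]])
scores = [ccr_efficiency(x, y, j) for j in range(3)]
```

A score of 1 marks a unit on the efficient frontier; scores below 1 indicate how far a unit’s inputs could be contracted while keeping its outputs attainable by a convex-conical combination of peer units.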
The benchmark method was applied in three case studies of U.S. airports:
The first case study provided a benchmark of the level of domestic passenger air service to U.S. metropolitan areas. The frequency of service to hub airports and the number of non-hub destinations served were measured in relation to the size of the regional economy and population. The results of this benchmark showed that seven of the 29 metropolitan areas have the highest levels of air service. Nine areas, including Portland, OR, San Diego, and Pittsburgh, have poor levels of air service. Factors contributing to poor air service include the lack of airline hub service, limited airport capacity, and low airline yields.
In the second case study, a benchmark of the degree of capacity utilization was conducted at 35 major U.S. airports, with utilization defined as the level of air service and the volume of passengers carried in relation to runway capacity. Seven of the 35 airports have the highest levels of capacity utilization, while six airports, including Honolulu, Portland, OR, and Pittsburgh, have poor levels of capacity utilization. Some airports with high capacity utilization incur large delay costs, while the poorly utilized airports have excess capacity, indicating that funding for capacity improvements should be redirected from the poorly performing airports to those that are capacity constrained.
The third case study recreated an existing, widely published benchmark. It took the premise of a previously conducted benchmark of airport efficiency and recreated it by applying the new benchmarking methodology in two component benchmarks: (1) a benchmark of the airports’ operating efficiency, using parameters such as the number of passengers and aircraft movements in relation to runway capacity and delay levels; (2) a benchmark of the airports’ investment quality, using factors such as the debt service coverage ratio, the share of origin-and-destination passengers, and the level of non-aeronautical revenues.
The results of the new benchmark showed no statistically significant correlation with the results of the original benchmark, leading to a different set of conclusions. This illustrates the importance of a comprehensive and systematic approach to benchmark design.
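One common way to test whether two benchmarks agree is a rank correlation between their score vectors. The dissertation does not specify its test, so the following is only an illustration; the score values are hypothetical and the use of SciPy’s `spearmanr` is an assumption.

```python
from scipy.stats import spearmanr

# Hypothetical efficiency scores for six airports under two benchmarks
original  = [0.91, 0.74, 0.88, 0.65, 0.79, 0.70]
recreated = [0.62, 0.90, 0.40, 0.70, 0.55, 0.80]

# rho measures agreement between the two rankings; a p-value above the
# chosen significance level (e.g. 0.05) means the correlation is not
# statistically significant, i.e. the benchmarks rank airports differently.
rho, pvalue = spearmanr(original, recreated)
```

A low or insignificant rank correlation, as found in the case study, indicates that the two benchmark designs lead to materially different conclusions about which airports perform well.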
Practical implications of the analysis for policymakers relate to the allocation of funding for capacity improvement projects. Airports in some areas operate at high levels of capacity utilization and provide high levels of air service for their regions. These areas are at risk of not being able to satisfy continued growth in air travel demand, limiting the potential for the areas’ future economic development. The most strongly affected area in this category is New York City. Similarly, the analysis found areas where the current level of air service is limited due to airport capacity constraints, including Philadelphia and San Diego. While airport capacity growth is subject to geographical and other restrictions in some of these areas, increased capacity improvement funds would provide a high return on investment in these regions.
In contrast, the analysis found that several airports with comparatively low levels of capacity utilization received funding for increased capacity in the form of new runway construction. These airports include Cleveland, Cincinnati, St. Louis, and Washington-Dulles.
In light of this indication that improvement funding is currently not optimally allocated, this benchmarking method could serve as a systematic, transparent means of enhancing the funding allocation process.