The need to improve how we evaluate the Army’s contracting work and use that data is well documented; for example, the U.S. Government Accountability Office (GAO) issued a report in June 2017 titled “Army Contracting: Leadership Lacks Information Needed to Evaluate and Improve Operations.”
The problems aren’t the Army’s alone. In 2019, the Section 809 Panel—formally the Advisory Panel on Streamlining and Codifying Acquisition Regulations, established by Congress in the National Defense Authorization Act for Fiscal Year 2016—recommended to “Use existing defense business system open-data requirements to improve strategic decision making on acquisition and workforce issues.” The panel, composed of recognized experts in acquisition and procurement policy across the public (uniformed and civilian) and private sectors, added that “DOD lacks the expertise to effectively use [enterprise-wide acquisition and financial data] for strategic planning and to improve decision making.”
The overall lead time for defense acquisition is too long to keep up with great-power competitors and non-state actors. The Section 809 Panel, which completed its mission in July 2019 having published a three-volume final report over the previous 18 months, stressed this repeatedly. The panel recommended a “war footing” approach whereby “rapidly and effectively acquiring warfighting capability and delivering it to Service Members takes precedence over achieving other public policy objectives.”
For the contracting workforce, this means a focus on the pre-award phase of contracting. The Section 809 Panel recommends that we provide “… products, and services at a speed that is closer to real time than the current acquisition process allows.” At the same time, the Army is implementing new enterprise-resource planning software, in part to produce better data about what occurs during the contracting process. Our leaders want to know: “What does success in contracting look like?” and “How can we ensure we’re allocating resources to the right things?” These are old questions asked with new urgency.
CURRENT METRICS AREN’T GREAT
Measuring contracting tasks by requisite labor hours could be a good solution for some of the routine actions needed in contracting. For example, selecting clauses for an upcoming requirement should take a certain amount of time, which should be easy to determine based on the size and type of anticipated contract actions. Other examples include reviewing invoices, awarding commercial contracts below the simplified acquisition threshold, exercising preexisting options, processing incremental funding modifications and inputting data. Data input typically includes the use of enterprise resource planners for contract award and management, as well as other, more specialized systems such as the Synchronized Predeployment and Operational Tracker, the Trusted Associate Sponsorship System and the Joint Contingency Contracting System.
Army Contracting leadership currently tracks a subset of these routine contracting tasks for compliance on a go or no-go basis (i.e., whether the task has been completed for applicable contracts). These tasks include completing evaluations of contractor performance, contract closeouts, funding de-obligations, appointing a contracting officer representative and completing contract action reports.
The problem is that we struggle even with these compliance (go or no-go) metrics! They should be easy to collect, but we frequently get bogged down by new or updated enterprise resource planners and by determining who owns which actions. We’re a long way off from a usable labor-hour model.
In addition, with the exception of completing contractor evaluations and ensuring oversight by contracting officer representatives, which of these compliance metrics currently measures something the Army should be prioritizing?
There are many other important processes we could be measuring (i.e., “quality metrics”). We could measure industry input on requirements. We could build a proposal difficulty score that rates how hard it is for vendors to participate in federal contracts, reflecting proposal sizes and evaluation sub-factors. It should be easy, with commercially available software, to score the readability and comprehensibility of requirements documents. We could measure how many of the best-value trade-off source selections (whereby we can exchange higher prices for improved performance) end up awarded to the lowest bidder (possibly due to fear of protest). How often are the new 2019 National Defense Authorization Act acquisition authorities being used? These are just potential pre-award areas to measure. There are many others.
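To make the readability idea concrete, here is a minimal sketch (not an existing Army tool) of how a requirements document could be scored using the standard Flesch reading-ease formula. The syllable counter is a rough heuristic; commercially available software would be more accurate, and a full proposal difficulty score would also weigh factors such as proposal page counts and the number of evaluation sub-factors.

```python
import re

def count_syllables(word: str) -> int:
    """Crude syllable estimate: count vowel groups, discount a silent trailing 'e'."""
    word = word.lower()
    groups = re.findall(r"[aeiouy]+", word)
    count = len(groups)
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_reading_ease(text: str) -> float:
    """Flesch reading ease: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).
    Higher scores indicate easier-to-read text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / len(sentences)) - 84.6 * (syllables / len(words))
```

A dense, jargon-heavy requirements paragraph will score far lower than plain language, giving leaders a cheap first-pass signal about which documents may be hard for vendors to parse.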
On a related note, one measure the Army currently uses to size the contracting workforce is how many dollars an organization puts on contract and how many contract actions it executes. This method fails to capture labor-hour tasks, such as time spent inputting data into the Synchronized Predeployment and Operational Tracker, the Trusted Associate Sponsorship System and the Joint Contingency Contracting System, which are independent of dollars and actions. It also could create an incentive for individual contracting employees and organizations to prioritize the number of contracts they award over the quality of those contracts (i.e., quantity over quality). We cannot analyze whether, or to what extent, contract quantity is being exchanged for contract quality unless we develop and evaluate the quality metrics.
METRICS NEVER TELL THE WHOLE STORY
A true culture change in Army Contracting would require us to acknowledge what can be measured or streamlined and what cannot. A prime example: The work of a contracting officer (KO) is to provide expert services, but expert services are notoriously difficult to quantify. A recent article in the Journal of Behavioral and Experimental Finance defines a market for expert services, or “credence goods,” as one where there is “asymmetric information between the expert seller and his customer regarding the fit between the characteristic of the product and the needs of the customer” (e.g., experts such as auto mechanics, surgeons and attorneys). That article, “Credence goods in the literature,” provides this definition and outlines the two fundamental problems in markets for credence goods: The expert could fail to provide sufficient effort, or could provide more effort and time than needed, without the customer’s knowledge.
The way to monitor and improve KO performance is analogous to how one would evaluate other experts. KOs determine the content quality of contracts and the process for source selection, lead negotiations and draft decision documents on claims that are quasi-judicial and require independent KO judgment. Each scenario is as novel as the requirements, and what’s best or fastest isn’t ascertainable using any existing decision tree. It is hard to tell both during and after contract formation whether the KO did a good job. However, a KO’s poor performance may manifest in very consequential ways in terms of dollars and performance.
It is hard for non-experts to tell whether a KO exerted the right amount of effort, in the same ways that it is hard to tell whether an attorney billed too many hours. Rating a contract based on complexity beforehand will not provide someone an easy answer for how much effort is required, because the factors that create that complexity are often unique. It’s neither possible nor desirable to attempt to reduce the entirety of the contracting workload to something that one can determine upfront in terms of labor-hours.
Finally, the right effort—and right amount of it—should be up to other experienced KOs to analyze. Commissioned officers who serve in this role should concern themselves primarily with becoming experts who can perform and critique these aspects of contracting, to ensure that the workforce remains focused on the Soldier and capability overmatch.
OBSERVATIONS ON THE WORKFORCE AND IMPROVING OUTCOMES
- Implementing new systems will stymie the workforce in the short term. Atul Gawande, a surgeon and public-health researcher, wrote a 2018 article for The New Yorker titled “Why Doctors Hate Their Computers,” in which he described the “revenge of the ancillaries.” The struggle is the result of the system design choices being more political than technical: Those doing medical billing have different concerns than doctors do, but the recommendations of the administrators become part of the software the doctors must use (to their irritation).
This is a useful analogy for the burden of new enterprise resource planning software on the workforce. We should better forecast updates of the software and enable the average worker to provide suggestions on improving systems. A solution might be to form a single U.S. Army Contracting Command office that seeks input when creating Army systems and consolidates Army workforce input for other DOD systems.
- The Army contracting community should consider identifying members of the workforce who focus primarily on the repetitive “labor-hour” type tasks associated with contracting, possibly designating them as purchasing agents or procurement technicians. That way, if performance on those tasks declines, the Army could either assign more technicians or provide more training. This could help alleviate the tension between the dual demands for contracting speed and more data input.
- Leaders should distinguish tasks as either routine or requiring expertise. The Financial Times in 2019 ran the article “Law firms’ love affair with the billable hour is fading,” in which it rated different firms on their ability to move away from the billable-hour model to other methods of pricing (i.e., quantifying) the expert services they provide. The winner? The Financial Times found that Accenture’s legal department was able to cut its costs by 70 percent by creating two workflows: the “complex” contract workflow, handled by senior attorneys, and the “transaction” workflow, handled by offshore junior attorneys using automation. While government employees can’t be “offshored,” we should realize that we have some expensive, highly trained employees doing some very repetitive tasks. That’s not acquisition on a war footing.
- All discussion about the workforce is ultimately about resources. The dollars-and-actions method of allocating resources has serious flaws. It measures the wrong things, fails to measure the right things and doesn’t account for novel situations requiring expertise.
Increasing performance metrics in contracting is a worthwhile goal, but the application of expert abstract knowledge to diagnose and resolve novel problems is inherently difficult to measure. To change the culture of Army Contracting, we should improve the metrics we have, reduce our deference to the flawed ones and facilitate data gathering. Ultimately, however, expertise is what will truly improve outcomes.
MAJ. BRIAN J. BURTON is a warranted contracting officer at Army Contracting Command – Rock Island, Illinois. He received his J.D. from the George Washington University Law School in 2014 and is an associate member of the Virginia State Bar. He received a B.A. in philosophy from Arizona State University. He is a member of the Army Acquisition Corps and holds Level III certification in contracting as well as Level I certification in program management.
Subscribe to Army AL&T - the premier source of Army acquisition news and information.