Elected and appointed officials in the field of economic development are often long on touting the benefits of proposed spending to stimulate growth but very short on the follow-up to find out whether the programs actually work.
That’s nothing new. A 2000 report by Pennsylvania’s Legislative Budget and Finance Committee (LBFC) served as a fairly comprehensive performance audit of DCED’s programs and found that monitoring efforts ranged from "rigorous to none" and that the Department rarely "verif[ied] the accuracy of jobs data." More recently, the Auditor General examined the Opportunity Grant program and found lax efforts to verify job creation numbers.
So the new Pew Center on the States report evaluating how well states monitor their incentive programs is treading familiar ground. It is an ambitious undertaking, ranking all 50 states and DC. All states spend money to attract and retain jobs, the report notes, but there is great variation in how well they monitor the effectiveness, the "bang for the buck," of those incentives. The report argues there should be more emphasis on measuring whether what is promised is actually delivered, and consequences for failing to achieve targets.
Based on its review of documents and interviews with officials, Pew places Pennsylvania in the middle of the pack, one of 12 states labeled as having "mixed results." Under the dual evaluation of "scope" and "quality," PA got high marks for having a schedule for reviewing programs and for determining whether incentives achieve their stated goals. The problem is that the rating rested on a review of just two LBFC documents, one on tax credits and one on Keystone Opportunity Zones.
Beyond the 14 tax credits reviewed in the LBFC report that Pew referenced, a quick look at DCED’s program and funding finder turns up 53 links for "loans" and 61 links for "grants," meaning PA may be rigorous on only the fraction of its incentive programs that Pew reviewed. The same could hold true for the other 49 states: relative standings depend on which programs the evaluators examined, so a state could look rigorous or lax based on that selection alone. And none of this counts the plethora of incentive programs states may have authorized their local governments to carry out but do not monitor themselves.