For the first time in years, the overall score across all agencies reviewed on the Federal Plain Language Report Card dropped, and some agencies received scores below a C-minus.
“What was found is that unfortunately, the average grade for writing and information went down by about 11 percent this year,” said Rep. Dave Loebsack (D-Iowa) in an Oct. 12 conference call. “I will say, as a former college teacher, I wouldn’t be happy with that. I’m not sure the American public should be either.”
The news wasn’t all bad: the Agriculture Department received top grades for both of its reviewed documents, and the Social Security Administration earned an A-plus for its FAQ page. These agencies, along with the departments of Defense, Education and Veterans Affairs and the General Services Administration, all improved their grades over the past year.
But the good news was largely overshadowed by some of the lowest scores in years.
“I was disappointed that for the first time in a few years, we saw two agencies receive D-plus grades,” Loebsack said. “I guess it goes to show that while some agencies continue to excel, there really is still room for improvement.”
The departments of Housing and Urban Development and Treasury both received D-pluses on their FAQ pages, while Commerce got a D-plus for its infographic. While HUD wasn’t graded last year, Commerce, Treasury and nine other departments and agencies received lower scores than last year. Before this year, no agency had scored lower than a C since 2014.
This is the sixth year that the Center for Plain Language, a nonprofit organization dedicated to promoting clearer, more understandable language in government and business, has released the report card as a way of holding agencies accountable under the Plain Writing Act of 2010.
“We’re trying to keep the importance of plain writing on the front burner for agencies, even in this small form of accountability, and really reward the ones that produce some good examples,” said Dr. Chip Crane, the center’s corporate secretary and lead for the federal report card project.
Crane said it wasn’t clear why the scores were lower this year, but he did advance a few possible theories.
“Maybe this being a transition year, with a lot of change, of course … maybe it hasn’t been as much on the front burner for some agencies because of all the other transitions,” he said.
He also suggested that a change in the type of documents reviewed could have been a factor.
In past years, the Center asked agencies to submit two public-facing forms or documents for review, and occasionally obtained them on its own when agencies didn’t respond. This year, instead of requesting forms, the Center reviewed each agency’s frequently asked questions page and one example of an infographic.
“The frequently asked questions page is almost at the heart, the essence of the plain writing concept, because you have a customer, a person in the public who is coming to that agency, most commonly of course on a website, with a question and the agency has anticipated that,” Crane said. “An infographic, this was a decision we made to include because increasingly, visuals are a big part of communications. And infographics are growing in popularity as a genre, you might say, or a document type, to communicate effectively with.”
He said he would have expected FAQ pages to get higher scores than forms or documents frequently weighed down by bureaucratic language.
“It may be that some agencies take frequently asked questions pages for granted,” Crane said. “I’m just speculating. Maybe they’re focused on other pages, other reports and things, and they’ve got that FAQ page, even though it should be the most audience-friendly. Maybe they’ve worked to get their forms a little more clear to improve compliance. It could be that the graders had a higher expectation of a frequently asked question page.”
The Center included examples of graders’ comments on both high-scoring and low-scoring documents. One grader pointed out that the Social Security Administration’s FAQ page answered the questions users actually ask, rather than the ones the agency thought they should ask. By contrast, a grader noted that the Transportation Infrastructure Finance and Innovation Act’s FAQ used the acronym “TIFIA” throughout without ever spelling it out, and that the first piece of information it offered was the statute code for the act.
Graders also responded well to large fonts and clear, helpful details on infographics, but marked down details that were unrelated or lacked context.
The volunteer judges scored the agencies on several criteria, including:
Understanding the audience;
Manner or voice;
Structure and navigation;
Information design and presentation;
Pictures, graphics and charts (if applicable).
The graders gave scores from one to five, with five being the highest, “excellent,” score. Each document was graded by at least two readers, and the final score was the average.
The Center also added half a letter grade for on-time submissions that included thoughtful and thorough descriptions of audience and purpose.