aefenglommung (aefenglommung) wrote,

Nobody asked me, but . . . Part V (and last)

Finally, we come to the local church in our thoughts on restructuring The United Methodist Church. We do this frequently, at least in the sense that we rename all the committees. But there is little that the overarching organization can do to affect the life and order of the actual congregations. Laypersons will practice their faith as they will or can; pastors will preach, offer the sacraments, and provide leadership as they are able.

But there is one area where we could help the local church, and that is to give it a yardstick for self-evaluation that people can actually understand. Evaluation criteria and schemes come and go, but most congregations never benefit from them; likewise, we occasionally hear of Conference-led evaluation projects, which are mostly rejected out of fear that they will be used to close small churches' doors. The basic problem is that all the evaluation tools used by the pros (clergy, General Agency flacks, writers of books on Church Administration) use vague, cloudy terms the people do not understand. It is no surprise that they resist being measured by such instruments, or that their attempts to use them usually yield just what they've always done before.

Back in the day, I was a "Camp Accreditation Specialist" (we don't call them Inspectors any more) for the Boy Scouts. Each summer, I would lead a team of volunteers (with one professional advisor) in visiting Boy Scout summer camps, usually in their first week of operation. Each camp had been given a set of objective criteria by which the camp had to be run, including health and safety rules, management practices, site setup, credentialing of leadership, conduct of program, etc. Certain items were designated as mandatory.

We would begin by taking a tour of camp. We would poke our noses into kitchens (looking for hot and cold charts, sanitation practices, etc.), into showerhouses (looking for popoff valves and ground fault interrupters), into campsites (looking for hazards). We would visit each program area to see that it was properly set up and that kids were having fun. After touring the entire camp, we would return to the office and go over the Director's "control book" -- the 3-ring binder in which everything from letters of agreement with local EMS services to the staff's CPR cards was kept.

Then, we would go down the evaluation criteria one by one. Were there opportunities to go fishing? Were there sufficient shower heads? Were all program areas clean and free of hazards? Were staff in uniform? Had the camp menu been approved by a licensed dietician? And so on. If the camp satisfied all mandatory criteria and passed at least 75% of the total number of criteria (usually about 100), then the camp was fully accredited and given a banner to display to show its achievement of the quality standard set by BSA. If the camp satisfied all mandatory criteria but had a lower overall score, it would be conditionally accredited, and we would counsel with the leadership on what they needed to do to improve. If a mandatory criterion was failed, we had the authority to demand improvement on the spot, as well as the authority to close that area -- or even the whole camp -- if we deemed it necessary.
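For readers who like to see the rule stated exactly, the accreditation decision just described can be sketched as a short function. The function name and data shape here are illustrative, not official BSA terminology; only the 75% threshold and the three outcomes come from the description above.

```python
def accreditation_status(criteria):
    """Decide a camp's status from a list of (passed, mandatory) pairs.

    Mirrors the rule described above: any failed mandatory criterion
    trumps everything; otherwise passing 75% or more of all criteria
    earns full accreditation, and anything less is conditional.
    """
    # A single failed mandatory item forces on-the-spot action.
    if any(mandatory and not passed for passed, mandatory in criteria):
        return "mandatory failure"
    # Overall score is the fraction of all criteria passed.
    score = sum(1 for passed, _ in criteria if passed) / len(criteria)
    return "fully accredited" if score >= 0.75 else "conditionally accredited"
```

A camp passing 80 of 100 criteria, all mandatory ones among them, would come back "fully accredited"; the same camp with one mandatory item failed would come back "mandatory failure" no matter how high its overall score.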

I thought that was a good system for helping volunteers understand and apply the measures of good camp management and program. My opinion was reinforced by what I experienced in my last year of classes for my doctorate. Our doctoral program was undergoing accreditation by our regional agency, and guess what -- they used a criterion-based evaluation system pretty much like the one the Boy Scouts used. Well, if that's what they use to make clear to PhDs what is required, then it must be pretty high grade, you know?

So, here's my suggestion. We need a set of criteria to evaluate effectiveness for a United Methodist congregation. Some items would be mandatory (from placement of fire extinguishers to the frequency of celebration of the eucharist -- and not to forget receiving at least one new member and paying all apportionments); others would be recommended (starting a new Sunday School class, having a regularly-updated website . . .). The criteria would be specific enough and yet flexible enough so that churches of all sizes and settings could succeed; no one kind of church would be used as the measuring stick for all others.

In the Spring of the year, teams trained by the District Lay Leader and composed of other laypersons would fan out over the District to visit congregations by appointment. They would attend worship and Sunday School. They would meet with the pastor and key leaders and go over the criteria for evaluation. If all mandatory criteria were met, and the congregation achieved an overall score of 75%, then that church would be given a certificate and duly recognized for the quality of its ministry. At the same time, the church would be given counsel on any items in which it was deficient; if the overall score was too low, the District could target the church with specific help in its areas of need. If any mandatory criteria went unmet, the church would be put on a watch list for immediate improvement, with consequences to follow if the criteria were still not met.

In the Fall, the contents of this annual evaluation would be a significant basis for discussion at Charge Conference. The church would be expected to show improvement in any items flagged at the Spring meeting as seriously deficient. If improvement were not shown, then that would be sure to be emphasized in the next Spring's visitation. A congregation that regularly failed the same criterion -- especially a mandatory one -- would be required to fix the problem or be referred to a committee with the power to recommend sanctions to the Annual Conference, up to and including closure of the church.

In my experience, this kind of evaluation system consistently yields good results. The threat of sanctions has to be real, but I have only once ever had to consider using them when visiting camps. What actually happens is that people who are given an understandable list of things that must be in place by a date certain usually rise to the occasion and put them in place. And when people are evaluated by their peers using language that all can understand, they'd rather earn praise than try to make excuses that they know won't fly.

Final note: The last time my Conference was talking about doing evaluation of local churches, I actually wrote up a sample system of the foregoing and sent it to the Conference COM Director. It was never heard from again. I didn't even receive a note acknowledging receipt.