Our Evaluation

NAS is committed to rigorous review of all that we do. We want to know whether what we offer works well, and how and when it can be improved. We design programs as an integrated set of services – reaching the right people, at the right times, in the right format for perspective-changing experiences.

The process of evaluation is critical to doing the most good for the greatest number of people. Evaluation is fully integrated into all NAS program designs from the outset. We define learning objectives and desired program outcomes. We engage program participants and the broader arts and culture community in conversations that form the core of our learning. We look at immediate, short-term and long-term outcomes and make changes in real time to meet the needs of our participants. We are always learning.

How do we learn?

We learn in three stages. Immediately following our convenings, seminars and online courses, we gather information about participants’ satisfaction with the experience, the relevance of the content and details about their intentions for using what they learned. Next, we follow up to learn what actions leaders took after returning to work as a result of the learning experience. Finally, we check in several months later to learn what changes came from the program experience.

All of our evaluation is conducted confidentially. We only share this information in aggregate and never publicly identify any participant's results.

How do we test?

We begin by asking all program participants for their full participation in our evaluation efforts. Following convenings or online courses, we send an online evaluation questionnaire to each participant. We resend the questionnaire to non-respondents until the response rate is greater than 90%. In internal debrief meetings scheduled within a week of each event, the NAS team discusses evaluation results and direct observations by those in attendance and considers opportunities for program improvement. The program director then discusses opportunities for improvement with the faculty.
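The resend step above is a simple loop: compute the response rate, and if it has not yet passed 90%, re-contact everyone who has not responded. The sketch below illustrates that logic only – the function and participant names are hypothetical, not NAS's actual tooling.

```python
def response_rate(invited, responded):
    """Fraction of invited participants who have responded so far."""
    return len(responded) / len(invited) if invited else 0.0

def needs_resend(invited, responded, target=0.90):
    """Return participants to re-contact, or an empty list once the
    response rate exceeds the target (illustrative 90% threshold)."""
    if response_rate(invited, responded) > target:
        return []
    return sorted(set(invited) - set(responded))

# Hypothetical cohort: four of five participants have responded (80%).
invited = ["ana", "ben", "carla", "dev", "ella"]
responded = {"ana", "ben", "carla", "dev"}

print(needs_resend(invited, responded))  # ['ella'] – one more reminder to send
```

Once the list comes back empty, no further reminders go out and the debrief can proceed with the collected responses.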

After one month has passed, we contact the participants to ask questions about actions taken since the program. After three or more months have passed, we contact the team leader again for a description of what change the leader has been able to effect, if any. Periodically, we mine the data we have collected to gauge the efficacy of our programs and to look for patterns – positive and negative.
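The pattern-finding described above – and the commitment to report only in aggregate, never per participant – can be sketched as a simple grouping of satisfaction scores by program. The program names and scores below are invented for illustration; this is not NAS's actual analysis code.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical collected responses (program name, satisfaction score 1-5).
responses = [
    {"program": "Convening 2023", "satisfaction": 4},
    {"program": "Convening 2023", "satisfaction": 5},
    {"program": "Online Course", "satisfaction": 3},
    {"program": "Online Course", "satisfaction": 4},
]

# Group individual scores by program, then report only the aggregate mean,
# so no individual participant's result is ever exposed.
by_program = defaultdict(list)
for r in responses:
    by_program[r["program"]].append(r["satisfaction"])

aggregate = {program: round(mean(scores), 2) for program, scores in by_program.items()}
print(aggregate)  # {'Convening 2023': 4.5, 'Online Course': 3.5}
```

Comparing these aggregates across events and over time is one way patterns – positive and negative – surface.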

We track participant responses using the internet-based evaluation platform, Clicktools. This allows us to easily transfer and store evaluation responses in our customer relationship management database (Salesforce). This configuration enables us to efficiently manage data collection and analyze results over time and across events.

How do we use what we learn?

Based on our evaluation results, we have changed content and teaching methods in our programs, piloted and refined new offerings such as online learning events, and maintained contact with program participants over time to learn – sometimes to our great delight – how much they do with what they learn. This last area, organizational change, is the most difficult to accomplish, but it is the purpose of NAS’ work in the cultural field. Measuring it in a way that allows us to respond is one of the keys to our effectiveness.

In addition to finding encouraging data about participant satisfaction and program effectiveness, we collect and act on information about aspects of our programs that participants would change and barriers to implementing concepts learned in NAS programs.