Sunday, November 22, 2009

Honor for NonprofitSOS

This blog was selected by the Daily Reviewer as one of the top 100 nonprofit blogs!

Top 3 Weekly Blog Posts for Nonprofit Workers

1. "Evaluating Online Donation Service Providers" by Step by Step Fundraising

2. "Fundraising from Out-of-State? An Update on Registration Issues" by Kivi's Nonprofit Communications Blog

3."More on Charity Boards and Tough Times" by Nonprofit Law Blog

Bonus: "We overestimate the gap between nonprofit and for-profit jobs" by Penelope Trunk's Brazen Careerist - This post is a bit older (10/30/09) but has some interesting thoughts on nonprofit vs for-profit jobs

Saturday, November 21, 2009

Exploring Effective Strategies for Facilitating Evaluation Capacity Building

This AEA session was of particular interest to me. I would love to see more nonprofits investing in building their evaluation capacity, and this session discussed ten strategies for doing so:

  1. Coaching/Mentoring: building a relationship with an evaluation expert who provides individualized technical and professional support
  2. Technical Assistance: receiving help from an internal or external evaluator
  3. Technology: using online resources such as websites and/or e-learning programs to learn from and about evaluation
  4. Written Materials: reading and using written documents about evaluation processes and findings
  5. Training: attending courses, workshops, and seminars on evaluation
  6. Involvement in an Evaluation Process: participating in the design and/or implementation of an evaluation
  7. Internship: participating in a formal program that provides practical evaluation experience for novices
  8. Meetings: allocating time and space to discuss evaluation activities specifically for the purpose of learning from and about evaluation
  9. Appreciative Inquiry: using an assets-based, collaborative, narrative approach to learning about evaluation that focuses on strengths within the organization
  10. Communities of Practice: sharing evaluation experiences, practices, information, and readings among members who have common interests and needs (sometimes called learning circles)
See posts about other sessions I attended at this year's AEA: "American Evaluation Conference Summary Post"

Wednesday, November 18, 2009

Unique Methods in Advocacy Evaluation

This AEA session discussed common advocacy evaluation methods:

  • Stakeholder surveys or interviews - Print, telephone, or online questioning that gathers advocacy stakeholder perspectives or feedback.
  • Case studies - Detailed descriptions and analyses (often qualitative) of individual advocacy strategies and results.
  • Focus groups - Facilitated discussions with advocacy stakeholders (usually about 8-10 per group) to obtain their reactions, opinions, or ideas.
  • Media tracking - Counts of an issue's coverage in the print, broadcast, or electronic media.
  • Media content or framing analysis - Qualitative analysis of how the media write about and frame issues of interest.
  • Participant observation - Evaluator participation in advocacy meetings or events to gain firsthand experience and data.
  • Policy tracking - Monitoring of an issue's or bill's progress in the policy process.
  • Public polling - Interviews (usually by telephone) with a random sample of the public to gather data on their knowledge, attitudes, or behaviors.
The session also highlighted four newer methods that have been developed specifically to address advocacy evaluation's unique challenges:
  • Bellwether methodology - Interviews conducted with "bellwethers," influential people in the public and private sectors whose positions require them to track a broad range of policy issues. Part of the sample is not connected to the issue of interest, and the sample does not have advance knowledge of the interview topic. Used to assess political will as an outcome, forecast the likelihood of future policy proposals or changes, assess the extent to which advocacy messages have "broken through," and gauge whether an issue is on the federal/state/local policy agenda and how it is positioned.
  • Policymaker ratings - Advocates (or other informed stakeholders) rate policymakers of interest on scales that assess the policymakers' support for, and influence on, the issue. Used to assess the extent to which a policymaker supports an issue and whether that support is changing over time (a rough scoring sketch follows these notes).
  • Intense period debriefs - Advocates are engaged in evaluative inquiry shortly after a policy window or intense period of action occurs. Used when advocacy efforts are experiencing high-intensity activity and advocates have little time to pause for data collection.
  • System mapping - A system is visually mapped, identifying the parts and relationships in that system that are expected to change and how they will change, and then identifying ways of measuring or capturing whether those changes have occurred. Used when advocacy efforts aim to achieve systems change.

Please note that the notes above are credited to the "Unique Methods in Advocacy Evaluation" presentation by Julia Coffman and Ehren Reed.
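To make the policymaker ratings idea concrete, here is a minimal sketch of how a tally might work. The scales, weighting scheme, and names below are my own assumptions for illustration; the session did not prescribe a specific formula.

```python
# Hypothetical policymaker ratings tally. The scales, weighting, and names
# are invented for illustration; they are not from the Coffman/Reed session.

# Each rating: support for the issue (1 = strong opponent .. 5 = champion)
# and influence on the issue (1 = low .. 3 = high).
ratings = [
    {"policymaker": "Legislator A", "support": 4, "influence": 3},
    {"policymaker": "Legislator B", "support": 2, "influence": 2},
    {"policymaker": "Legislator C", "support": 5, "influence": 1},
]

for r in ratings:
    # One simple composite: weight support by influence, so movement among
    # high-influence policymakers counts for more.
    r["composite"] = r["support"] * r["influence"]
    print(f"{r['policymaker']}: support={r['support']}, "
          f"influence={r['influence']}, composite={r['composite']}")

# Averaging the composites gives a single number to track over time.
average = sum(r["composite"] for r in ratings) / len(ratings)
print(f"Average composite: {average:.1f}")
```

Repeating the same ratings at intervals is what turns this into an evaluation measure rather than a one-time snapshot.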

See posts about other sessions I attended at this year's AEA: "American Evaluation Conference Summary Post"

A day in the life of a nonprofit worker

1. What is your name, organization, and job title? (You don't have to give your name or organization if you don't want to; it can be anonymous.)
Martin Wera – Nonprofit Services Manager, Charities Review Council

2. What is the first thing you do when you get in the office?
Put my lunch in the fridge, check email, check my calendar, and check to see if any nonprofits have finished the Accountability Wizard (the online educational tool the Charities Review Council has for nonprofits). After that, it varies from day to day.

3. How do you spend your lunch break?
During baseball season – check the updates about the Twins. Usually though I check the MinnPost Daily Glean, Politico, and any other news updates. Often I’ll check some nonprofit blogs as well.

4. Which part of your work do you enjoy most?
One of the best things about my job is the opportunity to connect and work with a variety of nonprofits. Depending on which organizations are going through a review, every day is different. Not only are the organizations different (e.g., size, issue area, etc.), but so are the questions they have about the review and the Accountability Standards. I also enjoy that, in helping nonprofits meet our standards, I'm part of the process of helping them become more effective, healthy organizations.

5. Please finish this sentence: If someone wanted my job, they would have to…..
…be a nonprofit geek.

6. What advice or tips do you have for other nonprofit professionals in your position?
This is trite, but true – do what gives you energy. Having worked at a variety of nonprofits, this has been the clearest lesson I’ve learned. From this point, everything else falls into place.

I am looking for people to participate in this series. If you are interested, please email me at kristen@advancementcompany.com.

Tuesday, November 17, 2009

How do we define and measure social impact?

This month's Nonprofit Millennial Blogging Alliance (NMBA) topic relates to social impact and how we define and measure it.

So, what is social impact? Well, I did what anyone with access to the internet would do: I googled it. It seems there isn't really a clear, precise definition. I couldn't even find a definition on Wikipedia - the closest I got was Social Impact Assessment or Social Impact Theory. So, I am going to go with a mish-mash of the definitions and partial definitions I found:

Social impact = the influence or effect an organization or group can have on people's lives. This influence or effect increases with immediacy and strength, and can have both positive and negative social consequences.

So, to use an easy example: More and more people continue to join Twitter because they know more people who are on Twitter, their close friends are now on Twitter, and everyone seems to be joining Twitter. Hence, one would say the social impact of Twitter is quite large and continues to grow as its strength and immediacy grow.

For nonprofits, this would be used more in the sense of how a nonprofit takes advantage of social change to make a difference in people's lives.

So, how would one measure social impact?

Well, since social impact is more than just the effectiveness of an intervention, it would make sense that a simple evaluation wouldn't be enough.

An interesting concept I came across is the impact map, which helps an organization clearly show the relationships between its inputs (resources) and its outputs (activities and outcomes). Basically, it helps an organization understand how it creates change.
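As a rough illustration, an impact map can start as nothing more than a structured list that traces resources through to change. Here is a minimal sketch using an invented tutoring program; every item in it is hypothetical, and a real map would also note how each outcome will actually be measured.

```python
# Toy impact map for a hypothetical tutoring program; every item below
# is invented for illustration.
impact_map = {
    "inputs": ["grant funding", "volunteer tutors", "classroom space"],
    "activities": ["weekly after-school tutoring sessions"],
    "outputs": ["120 students tutored per semester"],
    "outcomes": ["improved reading scores", "higher graduation rates"],
}

# Walking the map in order makes the chain from resources to change explicit.
for stage in ("inputs", "activities", "outputs", "outcomes"):
    print(f"{stage}: {', '.join(impact_map[stage])}")
```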

The impact map could be combined with a social impact assessment, which "includes the processes of analysing, monitoring and managing the intended and unintended social consequences, both positive and negative, of planned interventions (policies, programs, plans, projects) and any social change processes invoked by those interventions. Its primary purpose is to bring about a more sustainable and equitable biophysical and human environment." This would allow a nonprofit to map the relationships and measure the change that resulted from those relationships.

A more government-type perspective on social impact assessment can be found here. Some may even go as far as measuring the financial return on a social impact using a social return on investment (SROI) calculation.
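At its simplest, SROI boils down to a ratio of monetized social value to the investment that produced it. A toy calculation (all numbers invented) might look like the sketch below; real SROI analyses also discount future value and wrestle with how to put dollar figures on outcomes.

```python
# Hypothetical SROI calculation; all dollar figures are invented.
investment = 50_000  # total resources put into the program

# Monetized social outcomes (e.g., reduced public costs, increased earnings),
# estimated with financial proxies - the hardest and most contested step.
social_value_created = 175_000

sroi_ratio = social_value_created / investment
print(f"SROI of {sroi_ratio:.1f}:1 - every $1 invested generates "
      f"an estimated ${sroi_ratio:.2f} in social value.")
```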

Check out some other perspectives on social impact and how to measure it from NMBA bloggers:

What is Social Impact? by Nonprofit Periscope

Measuring Social Impact (wait…what is social impact?) by Onward and Upward

Thursday, November 12, 2009

Interactive Techniques to Facilitate Evaluation Learning

This was an interesting session that I attended at the American Evaluation Association's Annual Conference. It had some great tidbits. Here are a few things I wanted to share from the session:

The presenter discussed how much of what they learn people retain, depending on how they learn it. This is what she shared:
- People remember... 10% of what they read (book, handout)
- 20% of what they hear (hear a lecture, podcast)
- 30% of what they see (look at displays, diagrams, exhibits)
- 50% of what they hear AND see (live demonstration, video, site visit)
- 70% of what they say OR write (worksheet, discussion)
- 90% of what they do (practice, teach)

Manipulatives help learning!
- Manipulatives are objects that engage the learner in touching, feeling, and manipulating
- They stimulate the brain, either as part of the learning experience or by providing opportunities for movement
- Examples: basket of strange feeling objects, pipe cleaners, clay, cards, paper table covers that people can doodle on

Current research establishes a link between movement and learning!
- You can use brain breaks and energizers to get people moving
- Example of an energizer: when asking questions, use movement - "Raise your hand/clap if you use Twitter"

See posts about other sessions I attended at this year's AEA: "American Evaluation Conference Summary Post"

American Evaluation Conference Summary Post

I am currently in Florida attending the American Evaluation Association (AEA) conference. To follow conference-related tweets, search #eval09 on Twitter.

The days are jam-packed with fantastic sessions, and I likely won't get all of the interesting and useful tidbits posted until this weekend and early next week, but I will get them all up as soon as I can. This post will include links to all of the posts as I publish them:

Sunday, November 8, 2009

Nonprofit Conference Etiquette

Last week I attended the Minnesota Council of Nonprofits' joint annual conference with the Minnesota Council on Foundations and had the opportunity to meet lots of great new nonprofit folks, in addition to presenting two sessions, "Become Social Media Savvy" and "Evaluation 101: Focus Groups and Surveys".

While at this conference, a group of us were hanging out chatting about donor meetings and who should be going (a whole other post), when a woman from a nonprofit came up to us and started explaining what her nonprofit did. This was great, because who doesn't love to hear about what all the amazing nonprofits in their community do? Once she was done explaining, she continued on to ask us to give. When we politely declined, she went on to a group sitting next to us, gave the same speech, and asked them to give. After they declined, she left the area and, I can only assume, went to solicit more gifts.

Typically I never mind being asked for a donation, because nonprofits need money to run. But many conferences actually state in their rules that soliciting is prohibited, not to mention that it is against conference etiquette to solicit your colleagues at a nonprofit conference. I mean, didn't she realize that pretty much everyone there worked for a nonprofit organization, and that if they all decided to go around and solicit, we would have had over 1,500 people asking for gifts? It would have been mayhem, not to mention annoying, and would likely have resulted in people not attending. So, the next time you go to a nonprofit conference, remember that this is the "safe space" where all of us can come together to learn - not your opportunity to solicit your colleagues.

P.S. I will be posting more info from sessions at this conference, in addition to material from next week's American Evaluation Conference. I will also be posting slides from my two sessions on my website on Monday.