Talk:Learning and Evaluation/Connect/Community Dialogue
From Meta, a Wikimedia project coordination wiki
''Please participate here! Please add your thoughts regarding the five questions into the appropriate sections below. If you have something else to address, create a new section!''
 
='''<big>Program Evaluation</big>'''=
 
==Evaluating the program evaluation and design capacity-building initiative==
'''How should we evaluate and report on the program evaluation and design capacity-building initiative? What measure of success would you find ''MOST'' useful for assessing progress toward team goals?'''
<br />''Enter comments below this line''
 
* Much of the current documentation regarding program evaluation considers activities from the perspective of what they're called, or what they are; an edit-a-thon is one thing, an editor workshop is yet another thing, and so forth. It may be worth considering from the perspective of the desired outcome. What can we do to get more editors? What reduces the backlog of outstanding work on a project? The best measure of success may involve identifying all the different problems people are trying to solve—both Wikimedia affiliate organizations and purely online groups like WikiProjects on the English Wikipedia. [[User:Harej|harej]] ([[User talk:Harej|talk]]) 04:43, 17 May 2014 (UTC)
 
==Evaluating learning opportunities and resources==
'''Examining the evaluation learning opportunities and resources made available so far: (a) what have you found to be most useful in supporting program evaluation and design, and (b) in what evaluation areas do you feel there is not enough resource support?'''<br />
 
===(a) What have you found to be most useful in supporting program evaluation and design?===
<br />''Enter comments below this line''
 
* The opportunity to speak with evaluation experts and to have my work looked over by people who know what they are doing has been most helpful. [[User:Harej|harej]] ([[User talk:Harej|talk]]) 04:43, 17 May 2014 (UTC)
 
===(b) In what evaluation areas do you feel there is not enough resource support?===
<br />''Enter comments below this line''
 
* Attention needs to be paid to the development of governance. Different groups and organizations have different needs, but the ability of an organization to carry out serious program evaluation depends on the basics—being able to recruit volunteers, prepare a budget, and run a business. Running events is enough of a volunteer burden; running events as part of a broader plan requires additional volunteer work, and getting volunteers to do this depends on being able to effectively recruit and manage them. Many nonprofits invest heavily in volunteer training, and Wikimedia should be no exception. [[User:Harej|harej]] ([[User talk:Harej|talk]]) 04:43, 17 May 2014 (UTC)
 
==Evaluating beta report metrics==
'''Examining the Evaluation Reports (beta), please share your thoughts on the (a) strengths and (b) weaknesses of the metrics piloted.'''<br />
 
<big>''To discuss metrics more in-depth,'' please participate in the metrics brainstorm and prioritization we began at the Wikimedia Conference. <br />
'''Follow this''' [[Programs:Evaluation portal/Parlor/Metrics Brainstorm|'''link to an OPEN DISCUSSION''']] about the next steps for measuring program impacts.</big>
 
===(a) Strengths of the metrics piloted===
<br />''Enter comments below this line''
 
* The report clearly defines its terms and the projects reported on. Our collective lack of progress on program evaluation is well explored; it's fascinating just how far behind the curve we've all been. We were never expected to do this kind of work, but the report sets baseline expectations for measurement, and now I am interested in seeing what we learn over the next year. [[User:Harej|harej]] ([[User talk:Harej|talk]]) 20:01, 18 May 2014 (UTC)
 
===(b) Weaknesses of the metrics piloted===
<br />''Enter comments below this line''
 
* I feel like there is more going on than the report would have us believe—could future evaluation reports mine the annual reports filed by the affiliate organizations? I am also interested in learning what return we get on our money. One could spend $30,000 on a program, but who says we get more out of that than we would spending $30,000 on a new server or better wiki software? We need comprehensive metrics for the entire WMF budget because otherwise we have numbers without context. [[User:Harej|harej]] ([[User talk:Harej|talk]]) 20:01, 18 May 2014 (UTC)
 
='''<big>Grantmaking</big>'''=
 
==Evaluating grantmaking to groups and organizations==
'''How should WMF Grantmaking evaluate its various grantmaking initiatives to groups and organizations (e.g., [[m:Grants:APG|Annual Plan Grants]], [[m:Grants:PEG|Project and Event Grants]])? What strategies and/or metrics would you recommend for assessing ''grantmaking to groups and organizations''?<sup>''([[m:Programs talk:Evaluation portal/Parlor/Dialogue#2. Evaluating grantmaking to groups and organizations|DISCUSS here]]'')</sup><br>'''
<br />''Enter comments below this line''
 
* Grant proposals should be categorized according to what kind of program they want to implement. If a chapter seeks a grant to host an edit-a-thon, it should be flagged as an edit-a-thon themed grant and they should get the resources needed to carry it out. This kind of pre-emptive categorization ensures sufficient support from the beginning and it makes it easier to incorporate their findings into future reports. At the same time, make sense of the diversity of program proposals ''within'' a category. Edit-a-thons geared toward building relationships with institutions function differently from edit-a-thons geared toward recruiting individuals to edit Wikipedia, and edit-a-thons held in Nairobi are going to be different from ones held in New York or Bangalore. Now if a group comes along and has a completely novel proposal for something that's never been done before, that's an opportunity to work with them to flesh out the program's goals and pilot test something that could ultimately be replicated by other organizations. Grantmaking should be responsible for lead generation for Program Evaluation and Design. [[User:Harej|harej]] ([[User talk:Harej|talk]]) 20:16, 18 May 2014 (UTC)
 
==Evaluating grantmaking to individuals==
'''How should WMF Grantmaking evaluate its various grantmaking initiatives to individuals (e.g., [[m:Grants:TPS|Travel scholarships]], [[m:Grants:IEG|Individual Engagement Grants]])? What strategies and/or metrics would you recommend for assessing ''grantmaking to individuals''?<sup>''([[m:Programs talk:Evaluation portal/Parlor/Dialogue#3. Evaluating grantmaking to individuals|DISCUSS here]]'')</sup>'''
<br />''Enter comments below this line''
 
 
=Other comments and suggestions=
==Metrics for public support==
Somewhere in all of this evaluation we need to look at whether our activities build public support for the encyclopedia and open knowledge. We need general public support to ensure that the Internet remains usable and open, and to ensure that we have the financial backing from the public. Much of the necessary financial, political, and cultural support needs to come from people who are readers, users, enthusiasts, but not necessarily high-edit-count Wikipedians themselves.
 
Solid metrics that measure what activities generate public support would require some understanding of public opinion research. My guess is that it might be more cost-effective at this point to view ongoing public outreach as part of how you do business rather than get sidetracked with polling and surveys.
 
We also need to consider that in-person events can be important for building relationships between people who are supportive of Wikipedia, and look at finding ways for people to contribute who aren't in a position to be power editors, so they can remain involved with the project as their circumstances change.
 
Maintaining the health of the organization is part of the mix if we want sustainability. The "contribution to organizational health and sustainability" piece isn't a big part of our evaluation efforts so far. At some point, we might need to look at what other volunteer-based nonprofits are doing with regard to evaluation, accounting, and transparency. Even if these tasks are performed pro bono, it may be in the Foundation's interests to check the qualifications of the people performing them. My guess is that some of these core tasks are typically better handled by paid staff, and that the trick is building a system that's simple and modular enough that it's easy for volunteers to add their input. [[User:Djembayz|Djembayz]] ([[User talk:Djembayz|talk]]) 02:43, 20 May 2014 (UTC)
