This Etherpad:
Goal:
- To build in badges and assessments to the Webcraft Challenges
- To identify the requirements for the learning environment to support the assessments and badges for Webmaking 101
Webcraft Challenges:
- http://webmaking101.p2pu.org/ << now abandoned in favour of the pad http://pad.p2pu.org/webcraft-challenges
- Already aligned with Objectives (he is calling them Disciplines now): http://webmaking101.p2pu.org/disciplines (see also http://pad.p2pu.org/webcraft-challenges-badges)
- Each challenge isn't summative for one particular skill but combines various skills in each
- For example, a challenge might cover basic HTML, JavaScript and CSS - in order to complete the challenge, you need all of those skills
- People don't have to do all of the challenges in a series, can jump in at any point, although we want to provide a pathway for people to do them in a series to get the full experience
- Each challenge will be a separate study group, but its own "type" of study group ("Challenge", with specific features) (dev: Zuzel)
- Jamie's vision was to have multiple levels of each badge/skill that are achievable within each challenge - so someone might do more advanced work for a challenge than someone else; both are valid solutions, but one might be HTML level 5, whereas the other is HTML level 1 (see http://pad.p2pu.org/webcraft-challenges-badges)
- Pros of this approach:
- Gives learners a clear learning path
- Allows learners to get something for each effort, but still have room to grow
- Gets us away from the summative all or nothing badges
- Cons/limitations of this approach (at this stage):
- Complexity - this adds significant complexity to the badge system - managing the various levels of badges, etc.
- Burden on assessor - this requires that the assessor make judgements about the level of the work, which is not as straightforward and could discourage some peers from assessing
- Could this be done via the rubric? Like "used X" is level 1 but "used Y" is level 2?
- Expert requirement - same reason as above, the assessor needs to be at a top level of mastery for each skill to be able to judge the appropriate level. We know from the pilot that most participants are not experts
- PS: My sense is that this is too complicated for now, but happy to be dazzled by your ability to make it easy if you think we can really pull it off in two weeks. Push to post-pilot?
- CV> Question: if you complete a challenge, are you considered an expert in that challenge? If so, then maybe as part of completing the challenge you have to assess someone else that is taking the challenge, i.e. after I successfully completed "hello world", can I assess another peer that has submitted the "hello world" challenge? You can imagine a stretch on that where, as part of the course, there is a challenge about assessing another NPG peer - basically training to assess peers as part of the challenges.
- EK: So yes, I think that you are definitely more qualified to assess the work once you've been through a challenge (and we hope to get people to assess after they've completed them, so any thoughts on how to make this work are most welcome) - but I don't think they are necessarily expert enough to make judgement calls about the level of the work. I could complete Hello World in a very basic way and then I have completed it, but that doesn't mean I know enough to really look at someone else's implementation and say, I think that's Level 5 HTML, etc.
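- One way the "could this be done via the rubric" idea above might work, sketched in Python. This is only an illustration - every criterion name and level here is made up, not a real rubric - but it shows how levels could be derived mechanically from ticked boxes, so the assessor never has to judge "this is Level 5 HTML" themselves:

```python
# Hypothetical sketch: tag each rubric item with a level, derive the
# awarded level from which boxes the assessor ticked. All criteria and
# levels below are invented examples, not real rubric content.

LEVELED_RUBRIC = {
    "page has valid basic structure (html/head/body)": 1,
    "uses semantic tags (header, nav, article)": 2,
    "uses forms with client-side validation": 3,
}

def derive_level(checked_items):
    """Return the highest level whose criteria are ALL met.

    The assessor only ticks boxes; no level judgement is required.
    """
    earned = 0
    for level in sorted(set(LEVELED_RUBRIC.values())):
        criteria = [c for c, lvl in LEVELED_RUBRIC.items() if lvl == level]
        if all(c in checked_items for c in criteria):
            earned = level
        else:
            break
    return earned
```

This would address the "expert requirement" concern, since a non-expert assessor can still produce a defensible level.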
Badge and Assessment models/work:
Badges for Webcraft Challenges - Integration Points:
Erin's initial thoughts:
- Each challenge has at least one badge aligned with it, some will have several that align with the objectives/disciplines
- Each challenge will have a task after the challenge tasks where the learner can submit the total work from the challenge for the badges
- Badges will pull somewhat from http://webmaking101.p2pu.org/disciplines but will have skill badges aligned (suggested list to come from Erin/Chloe)
- Still considering:
- Have leveled badges aligned with the series of challenges
- So Challenge 1 might have HTML level 1 badge aligned with it and Challenge 5 might have HTML level 2
- Each badge within the challenge has a rubric aligned with it (template provided by Erin/Chloe, rubrics developed by Jamie, reviewed by Erin/Chloe)
- Need these rubrics to be robust and comprehensive but also super simple and approachable since there may be several for one challenge (since there may be several badges)
- Even though there might be separate rubrics for the different badges, could combine as one for the challenge to make it easier for the assessor and just handle the tallying/algorithms for the badges on the backend
- Peers/mentors will assess the work for each challenge against the various rubrics, badges are issued when established threshold is met
- Completing all of the challenges or some combination/aggregation of the skill badges == the Webmaking 101 Badge
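- The "combine the rubrics into one for the assessor and handle the tallying on the backend" idea above could look something like this sketch. Badge names, criteria and the fraction-met scoring are placeholders for illustration only:

```python
# Sketch: one combined checklist per challenge; each item is tagged with
# the badge it feeds, and the backend splits one assessor's ticks into
# per-badge scores. All names below are invented examples.

RUBRIC = [
    # (criterion, badge it counts toward)
    ("valid doctype and page structure", "html-basics"),
    ("headings used in a sensible hierarchy", "html-basics"),
    ("styles live in an external stylesheet", "css-basics"),
    ("colours/fonts applied via classes, not inline", "css-basics"),
]

def tally(checked):
    """Split one assessor's checklist into per-badge scores (fraction of criteria met)."""
    totals, met = {}, {}
    for criterion, badge in RUBRIC:
        totals[badge] = totals.get(badge, 0) + 1
        if criterion in checked:
            met[badge] = met.get(badge, 0) + 1
    return {b: met.get(b, 0) / totals[b] for b in totals}
```

The assessor sees a single list; whether each badge's threshold is met is then purely a backend decision.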
Jessica's initial thoughts:
- A challenge is a special project type that can be used to change the UI (like how the tasks are displayed or show a 'progress bar' or etc.) (I think Arlton is working on the UI/UX side)
- For the first phase, some things to think about are how this group has a different signup than others and any UI-type differences. Zuzel's already thinking about this.
- Task has a new completable status: user-run. The user can say "I have finished this task." It is up to the user and no one else verifies. Verification is done when awarding a badge. (Completion status is probably optional for the first phase.)
- Yes, this is essentially a self-assessment at the individual task level except for the last badge task where they submit their completed work
- PS: Yes - I like this even if we don't link to badges logic for now.
- The badge awarding part: (Couple of ways we could do it are sketched out below)
- 1) Can create a special 'badge' task (a task that is marked as a badge and links to which badge it will create, based on how many votes and/or what vote type and/or which voter) that has a votable interface. It could be task #5, for example, where the person posts a comment like "Finished this challenge. Here are the links to my work."
- I think just by submitting something for this task, the person is trying for the badges
- After 2 peers vote (yes? 4-5? average is 3.5?), the badges for that challenge are awarded (need to figure out the algorithm, but I do think we should use the rating 1-3 instead of the binary yes/no vote from the peer assessment)
- 2) Or can have the system help out: if we have a 'completion status' on each task, then after all tasks are complete, the system asks the user for links to all the work in order to present it to the peers for voting.
- not sure about this one - what's this in reference to? The badge is either earned or not? Do you mean for the webmaking 101 badge?
- Or for those projects/courses that are of type "challenge", or later when we have a 'badge-able' task or group. Just trying to cut back the number of steps and give us 'completed' as a status. But it's probably too ambitious for 2 weeks.
- PS: Probably too complicated for now.
- Peer voters are presented with the rubric? voting scale? along with the applicant's information (links to submitted work)
- Does the rubric change per badge or is it generic? If generic, then maybe showing what the applicant was supposed to do would be helpful for the voter. That way there's less clicking/moving back and forth.
- So, within a challenge, there will only be one rubric, even if there are multiple badges. There might be different sections of the rubric or something to handle HTML related stuff versus CSS related stuff, for example, but just one rubric. Rubrics between challenges will obviously be different.
- 9/9 JL: Since we are going to have "I've completed this" as a status on a task, should we also show a copy of the rubric before letting the person click "done"? Might be hard since the rubric is on the whole challenge and not just that task though.
- Ideally, the peer assessor is presented with the rubric while assessing. So perhaps the assessor clicks a button that says "Assess this work" (or something) and is then presented with the rubric, a rating scale and a text box. The assessor has check boxes next to each element of the rubric and works through it, checking off those things that the user has met. Once through that, they rate the overall work 1-4, leave comments about the rating, and click submit.
- The system tracks the rating and if the required threshold for the badge is met, issues the badge
- The user can see the assessment information which includes the information from the rubric (what was checked as met and what wasn't), the criteria and the feedback
- All of this, including the user's original submission of work, is eventually linkable from the badge at the evidence URL
- QUESTION: how do peer badges fit into this? Should still be peer-to-peer issued, so we could have another task that says "issue this badge to the peer in this study group that best meets these criteria" + comment but are people really going to be working through these together? Maybe we need to think about peer badges differently for the challenges - maybe they are more based around the peer assessment - so I can issue someone a mentor badge (or something like that) for helpful feedback in an assessment?
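- The assess-then-threshold flow described above could be modelled roughly like this. These are placeholder classes for discussion, not the real p2pu/Lernanta models, and the "3 ratings averaging 3+" rule is just the placeholder threshold from these notes:

```python
# Sketch of the assessment flow: each assessor submits checked rubric
# items, a 1-4 overall rating and a comment; the system issues the badge
# once enough ratings with a high enough average accumulate.
# Class and field names are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class Assessment:
    assessor: str
    checked_items: list   # rubric elements the assessor marked as met
    rating: int           # overall rating, 1-4
    comment: str          # justification, required with the rating

@dataclass
class BadgeApplication:
    badge: str
    submission_url: str
    assessments: list = field(default_factory=list)

    def add(self, assessment):
        if not 1 <= assessment.rating <= 4:
            raise ValueError("rating must be 1-4")
        self.assessments.append(assessment)

    def awarded(self, min_ratings=3, min_average=3.0):
        """Placeholder threshold rule: N ratings averaging >= X."""
        if len(self.assessments) < min_ratings:
            return False
        avg = sum(a.rating for a in self.assessments) / len(self.assessments)
        return avg >= min_average
```

Everything stored here (checked items, comments, the submission URL) is also what would need to surface on the evidence page.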
Philipp's initial thoughts
- Let's keep it simple
- I'd even consider limiting to one skill badge per challenge - so in challenge 1 we might only offer you the HTML badge (even though you learned other things). Also happy to see multiple badges per challenge, but the UX seems (too?) complicated.
- Don't need levels within badges for now (seems complex and not enough time to really think through)
- If you want to add some level of guru assessment, we can probably do that. We are building an "Adopt a challenge" feature. It is only open to people who have completed the challenge. If you adopt a challenge, you will get notifications if other people are working on it, so you can go and help if someone gets stuck. We will seed this with some friends, so there are already people we could consider gurus.
- Interesting. Do you think we will have enough of this and commitment from them to build assessments around it? We could essentially not require guru assessment but weight it differently so that maybe it counts as 2 peer ratings or something in the algorithm for badges.
- Question:
- What about community badges? Would love to see a first version/ pilot.
- Will users rate themselves using the rubric when they challenge for the badge? If the UX is not too complicated, I think that would be useful.
- Probably not in this round, I think we need to think about how that may/may not influence the assessors.
Features / Requirements
Existing features pulled from http://pad.p2pu.org/badge-integration - not sure how to include all of this here since this is a rewrite of my features. Tried to just work with your rewrite below.
ROLES
(1) Badge Admins (new role on p2pu)
- To create badges
- To create and assign assessments
- For this phase of the pilot with Webmaking 101, we will be the only ones creating badges
- PS: Is this an actual admin interface, or could we create these manually for first round?
- JL: I think we can create manually first round
- Django creates admin interfaces pretty easily though, no? This is what Zuzel did in OSQA. Could create a badge and had all the fields to put in the specifics.
(2) Study Group/Course Participants (or users that want to get a badge)
- For the first phase, badges will require users to submit summative work from the challenge as an additional task, and review each other's work. See peer assessment below.
- Some badges will be awarded peer to peer - ERIN: need to figure out how community/peer badges play in
CHLOE: 3 WAYS TO AWARD A PEER BADGE:
- Within the Task: you can pledge a badge (submit your work for someone to assess it) and also nominate someone for a badge (assess peers on their work)
- Outside the Task, similarly to adopt a challenge, or even included in that one, peers can assess others' work and issue badges.
- Within the Activity Wall: you can nominate someone for a badge from the activity wall, under a post.
- Subsequent phases: Some badges will be awarded to them automatically (based on metrics tracked in p2pu)
- Stealth badges awarded based on metrics such as logins
- TASK: what are metrics gathered, what do we want to be gathered? Jessica/Chloe
ASSESSMENTS LOGIC
- First phase of Webmaking 101:
- Self assessment:
- Learners determine when they have completed each of the challenge tasks
- Learners nominate themselves for a badge from the ones available for that challenge, e.g. in Challenge 2 there are an HTML Newbie and a Superblogger badge available; I nominate myself for the HTML Newbie and don't really care for Superblogger. A mentor will now assess whether I deserve the HTML Newbie badge.
- Peer assessment:
- This kind of assessment logic will allow users to get recognition from their peers
- Learners will submit their work at the end of each challenge as the last task
- Learners will be asked to give feedback to another peer, and nominate them for a badge, at the end of each challenge as one of the last tasks.
- Any peer can review
- If there are people that have adopted the challenge, we could weight their assessment more heavily since they would be considered gurus
- Peers will review submitted work against the rubric, checking off each element of the rubric that is met, then rate the total work from 1-4
- Rubric is different for each challenge and coming from Jamie/Erin/Chloe (saw the note at first part)
- Can a peer reevaluate? For example if the person submits and it's voted a 1 and then changed (how to store/show that) then the peer can reevaluate. Is it overwritten or do we keep a history somehow. (Feel like I asked this... going to look through the other pads.)
- I worry that the UX is going to get too complicated too quickly - we want there to be the ability for people to fix their work, but not sure how to handle this in the UX. If the original work changes, we could notify the assessors and have them go back in and edit their assessment?
- CV Suggestion: the ratings 1-4 are tied to the rubric, so it becomes easier for the assessor, this depends a lot on how elaborate the rubrics are.
- Comment/justification required with the rating
- Need to define the algorithms but badge will be awarded at some threshold of ratings
- Different per badge? No, should be the same for all skill badges - 3 ratings required with an average of 3 or higher, or something like that (and maybe guru or adopter votes could count as 2 ratings...)
- Criteria: for this phase, are we defining these per individual badge?
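- The "guru votes count as 2 ratings" weighting above could be a very small tweak on the threshold rule - here's a sketch (the numbers, 3 required ratings and an average of 3, are the placeholder thresholds from these notes, not decisions):

```python
# Sketch: a rating from someone who has adopted the challenge ("guru")
# counts as two peer ratings, in both the count and the average.
# Thresholds are placeholder values for discussion.

def badge_earned(ratings, required=3, min_average=3.0):
    """ratings: list of (value, is_guru) tuples, value in 1-4."""
    weighted_count = sum(2 if is_guru else 1 for _, is_guru in ratings)
    if weighted_count < required:
        return False
    weighted_sum = sum(v * (2 if g else 1) for v, g in ratings)
    return weighted_sum / weighted_count >= min_average
```

So one guru rating plus one peer rating would already satisfy a "3 ratings" requirement.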
RE: ALGORITHMS {DOWN THE ROAD}:
1) ASSESSMENT COMBOS: A badge could be made out of different assessment types, such as mentor, peer, guru and stealth. Here is a quick breakdown of the different combinations for assessment:
Badges can be awarded by (Bold the ones I think we will have for Web101):
- Mentors only
- Mentors & Peers
- Mentors & Gurus
- Mentors & Stealth
- Mentors & Gurus & Peer
- Mentors & Gurus & Stealth
- Mentors & Peers & Stealth
- Mentors & Gurus & Peers & Stealth
- Peers only
- Peers & Gurus
- Peers & Stealth
- Peers & Gurus & Stealth
- Gurus only
- Gurus & Stealth
- Stealth only
- OPEN QUESTION: which ones from these will we use for the Webmaking 101?
- Example of a Mentor Only?
2) ASSESSMENT VALUE: Different assessors have different value, i.e. a mentor's assessment weights in more heavily than a peer's.
Badges can be awarded based on (note: numbers are indicative):
- OPTION 1: a total score coming from the following conditions:
- A mentor's rating equals 3 times the number of stars in the rating, i.e. 3 x rating of 2 stars = value of 6 pts
- A guru's rating equals 2 times the number of stars in the rating
- A peer rating equals 1 time the number of stars in the rating
- A stealth rating equals .5 time the number of stars in the rating
- Example: To earn a Communicator's badge you need a total of 18 pts. A Communicator badge can be awarded by Mentors and/or Gurus and/or Peers. A Mentor has given you 3 stars ( 3 x 3 = 9 points ) and three Peers have given you 3 stars each ( 3 x 3 x 1 = 9 points ); no Guru has assessed you. Total value = 18 pts and you get your badge.
- OPTION 2: assessor combinations for different badge types (numbers are indicative):
- Skill badge: you need 1 Mentor & 1 Guru to nominate you for the badge and give you a rating higher than 3
- Peer badge: you need 5 Peers to nominate you for the badge and give you a rating higher than 3
- etc
NOTE: I see problems with both these options, such as: creating a point system might prove not to be sustainable when community numbers change, i.e. if you have 10 participants vs 300. A number-of-participants value could be added to the algorithm as a solution.
With the second option we don't cover badges that can be awarded by both peers, gurus, mentors etc. Could be solved by having custom settings for each individual badge, i.e. I want the condition for this badge to be that x mentors rate it higher than 2 and y peers higher than 3.
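- For reference, OPTION 1 above reduces to a few lines of code - here's a sketch using the indicative weights from the notes (3/2/1/0.5), with the caveat from the NOTE that fixed point totals may not scale with community size:

```python
# Sketch of OPTION 1: each assessment contributes weight * stars, and the
# badge is awarded once the total reaches the badge's point requirement.
# Weights and thresholds are the indicative numbers from these notes.

WEIGHTS = {"mentor": 3, "guru": 2, "peer": 1, "stealth": 0.5}

def total_points(assessments):
    """assessments: list of (role, stars) pairs."""
    return sum(WEIGHTS[role] * stars for role, stars in assessments)

def earned(assessments, required_points):
    return total_points(assessments) >= required_points
```

E.g. a mentor giving 3 stars plus three peers giving 3 stars each totals 9 + 9 = 18 points, meeting an 18-point requirement.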
- Evidence behind the badge will be submitted work and comments from the peer ratings
- Note: there may be more than one badge that peers are assessing for in the final challenge task
- Note: can we have different types of tasks - one type that is the self-check task and one that is the work submission/peer assessment final task?
- I believe so. This sounds like the self "I completed this" and then at a certain amount of "I completed" (which will probably be all tasks in that challenge) the user will be submitted for the peer assessment process.
- Later phases:
- Guru assessment:
- Some badges will require us to be more restrictive with respect to who can evaluate a candidate (i.e. trigger a badge award).
- Only those that have the badge can review and rate the work
- Honorary guru assessment:
- For seeding purposes (so guru assessment can start), we will need a way to issue a badge to someone directly even if they have not completed a challenge (see guru assessment for the reason we need this)
- Badge admins will be able to do this and the evidence will include who awarded them the badge and why (text filled out by the badge admin).
- Stealth assessment:
- In the first badge pilot we did not add badges that were awarded based on stealth assessment, but the software that we customized for the pilot had badges like these to encourage certain behaviours among its users (e.g. http://badges.p2pu.org/badges/23/pundit).
- This kind of assessment logic will allow us to automatically award badges based on metrics collected inside p2pu
- The initial stealth assessments available will be based on the metrics we are already collecting, and we will have to start collecting data specifically related to user behaviour around badge challenges.
- Examples of things we could give recognition for with stealth assessments are: visited course pages X number of days in a week, posted X number of comments, reviewed X number of submissions.
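- Stealth assessment could amount to a table of rules checked against collected metrics - a sketch below. The metric names and thresholds are invented examples (we still need the TASK above to define the real metrics):

```python
# Illustrative sketch: stealth badge rules are predicates over a user's
# collected metrics, checked automatically by the system. All metric
# names, badge names and thresholds below are invented examples.

STEALTH_RULES = {
    "regular-visitor": lambda m: m.get("course_page_visit_days_this_week", 0) >= 5,
    "commenter": lambda m: m.get("comments_posted", 0) >= 10,
    "reviewer": lambda m: m.get("submissions_reviewed", 0) >= 3,
}

def stealth_badges(metrics):
    """Return the stealth badges a user's metrics currently qualify for."""
    return sorted(name for name, rule in STEALTH_RULES.items() if rule(metrics))
```

New stealth badges would then just be new entries in the rules table, with no assessor involved.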
BADGES
- First phase badges: TBD from Erin/Chloe
- These are from the first 5 challenges? Yes CV working on this
- Skill badges - may earn multiple from each challenge
- Peer/community badges - still not sure how these play in yet CV working on this http://etherpad.mozilla.com:9000/webcraftassesmentsep2011
- Webmaking 101 badge - from completion of each challenge (this needs to be a different badge type in the system - not tied to peer assessments but instead based on total # of self-assessment completions)
- not tied to peer assessment on each challenge?
- Do people get lots of little badges or one big webmaking 101 badge? how to know what level of css one gets from that challenge so that the css badges are the same(?)
- People will get skill badges and peer badges based on assessments in the badge tasks within each challenge. The Webmaking 101 badge is an additional badge that is awarded once all of the challenges are completed. Not tied to the skill or peer badges at this point.
BADGE TASKS (within each Challenge)
- Creating a badge task will include:
- Filling out a title and description.
- Selecting and configuring the assessment logic. CV: should we have a template for that? Is that referring to peer, mentor, guru, stealth? Or the conditions that determine how those will be awarded, i.e. need a rating higher than 3, awarded by 2 peers?
- Choosing the badge that will be awarded to those completing the challenge.
WORK SUBMISSIONS
- First phase: badges will require candidates to submit a piece of work for review.
- Submissions will use rich text (meaning users can embed things like GitHub gists, ... as part of their submission and upload files).
- We need versioning (like we have on tasks) for submissions because users can go back to improve their work (don't think we need to reset the votes after the user posts a new version if it is clear on the evidence page which votes correspond to each version).
- EK: I like this if we can make it work - but could also notify each assessor if the work changes and ask them to go review again and edit their assessment (versioning on assessments?)
- If a peer votes again on a different version, the old vote will be overridden in the count but will still appear on the evidence page
- Users should be able to "delete" a work submission (and restore it). Data will remain in the database but it will not be displayed. -- The # of deleted submissions and versions can be included as a metric too.
- A message saying this submission was deleted will appear on the evidence page in case the user already sent the badge to the OBI
- If the user has not sent the badge to the OBI, the badges earned for this submission will not be displayed or sent to the OBI
- If the user has not earned the badge, the submission will not be visible for review and thus the badge will not be awarded.
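- The vote-override rule above (new vote on a newer version replaces the old one in the count, but history stays for the evidence page) could be as simple as this sketch, with placeholder names:

```python
# Sketch: votes are stored in submission order as (assessor, version,
# rating). All votes are kept for the evidence page, but only each
# assessor's latest vote counts toward the badge.

def counted_votes(votes):
    """Return {assessor: rating} using only each assessor's latest vote.

    votes: list of (assessor, version, rating) tuples, in the order
    they were submitted. Earlier votes are kept elsewhere for evidence.
    """
    latest = {}
    for assessor, version, rating in votes:
        latest[assessor] = (version, rating)
    return {a: r for a, (v, r) in latest.items()}
```

Storing the full list while counting only the latest vote per assessor gives both the override behaviour and the per-version evidence trail.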
- Second phase:
- Submissions go automatically to the challenge adopter's queue. Challenge adopters earn badges, e.g. for responding fast to requests for badges and giving helpful reviews.
WALL ACTIVITIES
- Activity around badges will be on the existing 'Default', 'Learning', and 'All' filters of the walls (groups/courses, dashboard, profiles)
- A new filter will be included just for activity related to Badges (named 'Badges').
- How does the deletion of walls mentioned on http://pad.p2pu.org/webcraft-ux affect this?
- Second phase:
- Wall activity could include the ability to award a badge from the entry on the wall, like Yelp does, i.e. on the activity wall I see Erin's link to a blog post, tagged as a Challenge 2 submission. I can click on that and, from inside the activity wall, award her a series of badges that are available for me to award (based on my status as a peer or a mentor). This is a UI issue.
- Wall activity could include social metrics such as likes, hearts, rounds of applause etc. These social metrics could accumulate into stealth badges, such as Popular, Active etc. These could also be referred to as social badges.
AWARDED BADGES
- Earned badges are displayed in the P2PU profile
- Would be cool to have a notification that tells the learner when they have earned a badge (like on OSQA)
- Some badges are pushed to the OBI
- Need images for the badges
- Need criteria pages and evidence pages - need to meet the requirements of the OBI: http://openbadges.lighthouseapp.com/projects/80353/badge-manifest
- Second phase:
- Integrate the idea of a "player dossier": a space to showcase badges, roles, smartools, stats etc. A space to track your progress and compare to how others are doing.
OPEN BADGE INFRASTRUCTURE
METRICS
- We will have to report metrics to evaluate the use of badges in p2pu (to superusers and school organizers)
- It can include:
- # badges issued.
- for those badge challenges that require candidates to submit a piece of work, # of submissions and number of people that submitted work
- for those badge challenges that require people to rate submissions, # evaluations and average rating
SMARTOOLS
- Upon completion of certain challenges and when awarded the skill badges, users "unlock" a tool, such as "text editor"
ROLES
- Upon completion of a study group/set of challenges, different combinations of badges unlock different roles, such as the HTML Whisperer or CSS Ninja.
- Leveling up in roles unlocks the ability to "adopt a challenge" and level up to become a guru or a mentor.
- TASK: Chloe will work on some roles for the Webmaking 101
- JL 9/13: Still working on this?
++++++++++++++++++++++++++++++++++++++++++++++++++++
Open questions:
- How do we handle skills across various challenges - can we have levels? Does that make sense with the challenges? Can we make leveling align with specific challenges (versus basing it on the quality of work within any challenge)?
- How many peer assessments are required for the badges? What are the algorithms?
- Badge design - we are talking about a bunch of additional badges, we'll need to have someone design them
- OBI interface - which badges are pushed into the OBI? Are the lower-level badges pushed, or just the aggregated badge?