Badge and Assessment Design and Advisory Group
++++++++++++++++++++++++++++++++++++++++++++++++++
Call in info:
US Toll Free: +1 877 395 2347 *Please only use the toll-free number if you really need to*
US Local / International +1 415 763 5901
"NEW" Doc (V2, this has been shared since Nov 2011)
https://docs.google.com/document/d/1iYbHTj35SbtOgolTf3k1z7tiYEHZuzv7w4RVOdw9LEc/edit
Philipp/Erin v2.1:
https://docs.google.com/a/p2pu.org/document/d/1EJe36jE1sXsNpl_Yv5mwnVEi0DP87tP0rx9SmHd0sbQ/edit?authkey=CPbOmLQJ&hl=en_US
OLDDoc ("The Original"):
https://docs.google.com/document/d/1TCUt9wD6OA-eIvtsXf6NIawI_MH11SWB83qDdZqQQZw/edit?hl=en_GB
++++++++++++++++++++++++
Sharing of the article beginning Nov 4
By Nils
- Gary Brown, Portland State U
- Theron DesRosier, Washington State U
- Steve Ehrmann, TLTGroup
- June Ahn, University of Maryland
- Jayme Jacobson, Univ of Idaho
- Early feedback - want clarification on the audience for this piece
- David to get:
- Chris Haskell Boise State
- Zarhina Merchant Texas A&M
- other doctoral students...
++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++
Weekly Mtg
Jan 11, 2012
Attending
Discuss proposed next steps:
- Get Erin to do a full review
- Frame as general paper that is suitable for a broad audience interested in the future of online education?
- Get this group to make sure you are still happy with it
- (Optional) - Submit to journal
- Question -> What is the audience? Is it suitable for an academic journal?
AECT [more narrow focus / instructional design]
EDUCAUSE Quarterly [widely available and read but would need to be shorter]
CACM
"First Monday" [no peer review / feedback]
Euro Journal of Open and Distance Learning
There is an IFIP meeting in England coming up (submission deadline is soon) that is more international, so a conference presentation followed by a JCAL journal article is a possibility.
Several journals like CITE, JTATE, etc. have a narrow focus on teacher education and technology
Discussion / feedback / thoughts on latest changes
- Re-ordering the framework bullets
- Revisit the connection between the re-ordered bullets and Chloe's document
- Otherwise, everyone is happy to re-order
Weekly Mtg
Dec 21
Attending
Nils, who is looking for Alex (who appears to be in the document)
David sent his regrets
Philipp suggested (by email) that we congratulate ourselves and he will look for a means to get the document an editor and a single voice. He suggested we meet Jan 11, 2012 to review the work of the editor.
Weekly Mtg
Dec 13 (a Tuesday, because the 14th won't work)
Attending (via a Google Hangout)
Nils
David
Alex
We worked all the way down the document, resolving most of the comments and leaving a couple with questions.
We plan to meet the regular time next week, Wed 21, 10AM Pacific, but need specific guidance.
A remaining question is to Philipp regarding getting support for an editorial 'once over' to bring all the writing into a single voice.
Weekly Mtg
Dec 7
Attending
Nils
Alex
Chloe
David
Alex did some wordsmithing and work in the front of the document
We decided that we need to say very explicitly that when we use "peer" we don't mean other students as in a classroom model. We simply mean a non-hierarchical relationship among learners -- anyone in the CoP at any level of expertise.
Let's reiterate "peer" in different parts and emphasize what we mean by referring to the definition. David and I felt that at least one reader kept returning to the classroom model and typical school meanings of "peer". (See notes from Nov 30)
User stories:
Should we use real people instead? That would give us more credibility; however, we are not suggesting that they are representative of ethnographic research.
Should we drop the user stories? They add to the document.
Authenticity adds to the document.
TASKS
- Nils will go through the whole document looking at the uses of "peer" and trying to reinforce the definition
- Chloe will work on a younger-person user story: probably not until next week though / will reach out to Dreamyard /
- Alex will work on a story from a student of his
- Nils will address the comment he and David left Nov 30 about a page into The Case for Peer Assessment and look at moving the text
- Next week, instead of Wednesday, Nils & Alex (& David?) will meet to go through the document
- After community review we should have one person revise the doc in one writing style
QUESTION: Philipp, we have some concerns that the document is not in a consistent voice and wonder about giving it an editorial going over.
Weekly Mtg
Nov 30
(last week got cancelled because of Thanksgiving)
Attending
Nils
David
- We spent some time thinking about "deeper learning" and ratifying our choice to continue using the term because we found a variety of organizations using the term in ways that appear highly congruent.
- We also clarified the focus on how to assess deeper learning skills -- and that there is a different purpose to talk about connected learning as a medium. We think the focus of the paper is *how to build an assessment system* for these hard to assess skills in online social settings, eg P2PU
- We resolved several other comments that Philipp and others had left in the Executive Summary.
- We stopped at a paragraph that begins "However, this introduces a new issue concerning peer-based assessment" leaving notes about perhaps moving it.
- We did not dig into Philipp's various comments in the case study section.
QUESTION: Do we need to say very explicitly that when we use "peer" we don't mean other students as in a classroom model? We simply mean a non-hierarchical relationship among learners -- anyone in the CoP at any level of expertise.
Observation: We used Google+ Hangout for the audio channel and it worked quite well when we stopped sharing video of ourselves. Video was nice for seeing David's friendly face, but after that he and I were really focused on the text of the document, so it was less necessary.
WEEKLY MTG
Nov 16
Chloe
Nils
David
Carla
Alex
Discussing plans for getting more feedback
Current Feedback:
June Ahn: More concrete examples, how do we actually do this? Maybe reach out to people who can provide us with case studies?
Gary Brown: it's not adding a whole lot of new things; maybe we are trying to get it into the hands of a new audience instead of breaking new ground? For those new audiences we should be really clear about what we are adding.
Jayme Jacobson: clarification on the audience
Reflections:
- What we are suggesting is that we are collecting everything that is out there regarding assessment and putting it on the table.
- We are not adding something really new; it's intended to be a synthesis, intended to create a recipe for platforms like P2PU.
- The MacArthur approach says that there is not that much out there for a broader academic audience. We suggest that we do that: approach the audience beyond educators.
- Suggestions: get feedback from others like Cathy Davidson + Sheryl Grant! Integrate the feedback we have already received. REDUCE rather than add (the paper feels too long?)
- One more week of sharing internally and then sharing with p2pu community, broader audience.
Add notes in the document and leave marks so that we invite commenting.
Question: are we writing it for a handful of 4-5 people? The Hewlett Foundation? (Philipp's question) Should we cut it down by 1/3? LESS IS MORE :)
Next Steps
Nils: call Gary and Theron to get clarity about their comments. Try Steve Ehrmann, who was program officer at Annenberg/CPB & FIPSE, now retired. Review section 8 on evolving the assessment system.
Everyone: ask others for feedback, reach out to doctoral students
Chloe: reach out to Philipp > scoping the audience
WEEKLY MTG
Nov 9 - cancelled
WEEKLY MTG
Nov 2
Alex
Carla
Chloe
David
Nils
Feedback on case study: Matt case study addressing all 8 components. Do we need all 3 case studies?
What do we gain from the other examples? Diversity of learners with different types of needs.
To show that it's not only about web development. The wider the audience, the better people perceive it.
If we have three case studies we don't necessarily have to hit all eight points in all three studies. Reads better without the notes. One thing to do before we get rid of them would be to match them to the mentions we create in the different sections.
Feedback on section 7 : include learner characteristics, review together
Who do we share it with?
open url
academics? global public response? both
first phase: share draft, community involved in the review of the document
develop this repository of ideas to
How does the document get published? Digress.it, CommentPress, etc. Broad: short version to EDUCAUSE, more detailed version in an assessment journal...
Question for Philipp: what do we do next? After reviewing it, where do we publish? Is the paper only tied to the Hewlett grant?
Next Steps
Nils: Review the area around the Donald Schon reference and decide whether to expand the paragraph or delete it
Alex: will clean-up document, review conclusion
Chloe: edit 2 remaining case studies
David: will look at conclusion section
Email check-in in 24 hours; get some outside reviewers
Each of us will think of a couple of peers to review
WEEKLY MTG
Oct 26
Attendees:
David suggested working "backward" from Chloe's document to ours. She organizes the 8 points we made into a structure with concrete examples and guidelines. Our document would benefit from more examples; Chloe marked some places to consider. We may want to borrow from Chloe's document for the Framework doc.
Nils will tackle conclusion
Case for peers section needs work to be more effective
Contours of the Learning Community - needs elaboration to help Chloe -- Alex will try to trace out how assessment goes into this
Knowledge component + practice component (plumbers+craft) - unique vocabulary (specialist language) + methodologies
Walk the whole document and tie back to deeper learning and assessment & tie back to the case studies.
Next Steps
Alex: make small tweaks, add more text on section 7
Nils: Conclusion
David: Clean up Case for Peers / take design guidelines and link to main doc (examples?)
Chloe: link user stories to 8 components
WEEKLY MTG
Oct 19
Attendees:
Discussing V2 document
David added new paragraphs in G & H
The section above there was too long. Chloe & David proposed moving it to an appendix
Keep the numbering A, B, C
There are 3 Heading 1's - Philipp is re-arranging
David will write a short version of the evidence-centered design example
NILS: work on the 7 (8) items in the Exec Summary, round off the rough edges.
Examples: David to edit his current long example down; Chloe to point at places where she'd like more examples
Conclusion: Philipp says "connect back to the work Chloe is doing, tying the theoretical framework back into design guidelines and implementation of the guidelines. We took this big step thinking about assessment in these environments; here are pointers and suggestions on how to move toward implementation"
In the conclusion bring all 8 principles together. These are design guidelines; make it an invitation to the next step. Wrap the concepts together again
CHLOE to share a design guide document she is creating. 3 overarching dimensions: social, dynamic, personal https://docs.google.com/document/d/1qWC4TzCGZI7i-NEeVAFtoAtXB3fSQeJKCi5rSVDSURw/edit
DAVID to share a seminal document by John Seely Brown from the early 90's. They outline online social spaces that support communities of practice. Bridge to our title: we need assessment as an element of the space -- how assessment drives it forward. Community and assessment are different; both are needed to support deeper learning (both helping the learning happen and documenting that it has happened)
P's thoughts on the relationship between the design guide document (Chloe) and the assessment framework document (trio): the two documents should test one another.
Next steps:
Table of Contents: Philipp (and others, by changing the headings and sub-headings)
Chloe to add pointers where examples will be helpful
David to narrow down example for SoW
David create appendix
Nils: Draft the Conclusion
WEEKLY MEETING
Oct 12
Meeting Cancelled
Nils forked the document (see new URL above) and did clean up work as proposed
WEEKLY MEETING
Oct 5
Meeting cancelled because several people were absent
WEEKLY MEETING
September 28 1PM ET
Attendees:
Agenda:
Alex's Proposal for Forward Motion:
1. Some agreement on 7 +/- principles (Alex's 7 paragraphs). Do we have that? Yes, I think so. Alex & I just reviewed (with wordsmithing, potentially)
2. Doc Structure:
a) Short intro
b) Principles a & b = executive summary that we have now
b2) a short concrete example here (user story) - CHLOE
c) Deeper learning yes, and serves as an intro to why peers - NILS
d) P2P -> P2PU build the case for peers as path to deeper learning, P2PU as an example implementation -- NILS
- question for Philipp: how much do we need to bring P2PU in, do we bring it up as an exemplar? PS: use as example, but paper should be generally applicable
e) Discussion / Lit Review for each Principle
- each of the 7 principles as headers; rationale; literature; implementation notes (suggestions to a designer) - ALEX
f) Proposed framework for assessment - this is where the pieces come together. David has some of this. Show how implementation notes come together in a whole system -- DAVID
Audience for framework for assessment = designers, educators, online community?
g) Conclusion (including assessment of framework?) relative ease of understanding how this works in informal learning vs the barriers of formal education structures
2. Chloe will work on pulling together an exemplary user story (or 2 or 3)
3. Nils will pull together C& D
4. Alex will flesh out, reorganize section E as 7 subsections. Some redundancy, but can be worked out!
5. David will get F organized to draw together the 7 into a coherent system, and compared/contrasted with other frameworks of assessment.
NEXT STEPS
CHLOE: will work on pulling together an exemplary user story (or 2 or 3)
NILS: will pull together c) & d)
ALEX: flesh out, reorganize section E as 7 subsections. Some redundancy, but can be worked out!
DAVID: will get F organized to draw together the 7 into a coherent system, and compared/contrasted with other frameworks of assessment.
WEEKLY MEETING
September 21 1PM ET
Attendees:
- Erin
- Chloe in Amsterdam
- Philipp
- David
- Alex
- Nils
- Carla - ops manager for open badges project
Notes:
- Alex's narrative is close to the bullet points; categorise some pieces in the document under the bullets.
- Started with one outline but then added a new outline, so should we start with one logic and then carry through? Start with the list of characteristics and "redo" sections of the paper?
- NEXT STEP: Start doing some color coding of pieces to see what fits and what doesn't. Nils > instead of color coding use initials, start a new document?
- The opening paragraph refers to deeper learning; we need a little narrative that defines deeper learning. Add the elements that define how to assess deeper learning. Tune that narrative to the principles and then go ahead and move deeply into it.
- NEXT STEP: create a connecting paragraph between the principles and the bullet points. Use the bullet points to structure/drive the paper (it could be the headlines for the paper)
- How can the principles, which are more theoretical, match with the bullet points, which are more design elements? NEXT STEP: rebundle the bullet points.
- Table of Contents after paragraphs that act as an executive summary.
- Framework or practices? A framework is more of a theory; practices don't have to map to a theory. The "How People Learn" framework can help us design practices?
- Feedback on bullet points:
- Domain of knowledge= structure of learning space
- Assessment Instruments= feedback
- Boundary objects = communication / communication vehicle (badge)
- Add a 6th bullet point about community? diversity
- Explain why the bullet points are useful?
- What is the question that the 7 bullet points provide an answer to? These are the design principles that define deeper learning assessment in peer online communities?
- Structure of Document
- What we want to achieve?
- Why is this important?
- Principles
- Online Learning too Broad; narrow to Peer-to-peer environments online.
- What are the advantages of online?
- Is Online necessary for peer-to-peer deeper learning?
- Is deeper learning possible without a peer environment?
- Yes, but not without a social environment (e.g. the classical PhD process)
NEXT STEPS
- Create bulleted list of 7 principles
- Create a connecting paragraph between the principles and the bullet points
- Nils takes next stab
WEEKLY MEETING
September 14 1PM ET
Meeting cancelled by mutual consent due to multiple schedule conflicts
WEEKLY MEETING
September 7 1PM ET
Attendees:
- Erin
- Chloe - traveling, moved to London :) for a conference
- David
- Alex - realistic optimist and soon to be dad to #2!
- Nils
- Carla - ops manager for open badges project
Notes:
- Trying to get some clarity about the changes to the document from last week.
- Specifications/bullet points - cater to the people that only get a page or two in
- Is this ready to be shared? Not quite yet - it wants to reach audiences like funders/decision makers, who need a solid handle on the issues yet might not go very deeply unless hooked on what the key issues are
- What would be the bullets that would function as an executive summary? (in the beginning of the doc)
- Write something that is fairly short - what are all the elements that need to come together to get the point across
- Missing piece between the lit review and the framework - what's the structure and how do all the ideas/lit fit in
- Crucial and general piece - move it up to the front
- Alex's first stab at aligning some of the thoughts around the key elements
- Stabilize and then align the lit/thought work with it
- Chloe's thoughts on doc: could we add a TOC (table of contents) at the beginning of the doc?
- The "social learning assessment framework needs" section is SO helpful; maybe we could start thinking about "what does social learning assessment look like?", i.e. use cases
NEXT STEPS:
- Go back through Alex's work to see if these are the right elements at the beginning of the paper; agree on these before diving in
- Share this with a wider audience
- Once we agree, massage the individual sections
- For those interested, download an individual copy and slice it/dice it/etc to see how it fits the executive summary
- Is there a missing heading about learner characteristics that keep learners engaged?
WEEKLY MEETING
August 31 1PM ET
Report on the outcome of David checking with Philipp about sharing of the Framework Draft. When? -- Philipp says not yet, it needs to be more readable
Attendees:
- Erin - not expected
- Chloe
- Alex
- Nils
- Philipp
Agenda
- Erin's feedback
- Next steps - work on framework
- Sharing a draft: in general, OK to be public, but P has feedback first that may help others give feedback
Notes:
- Current document is a collection of pieces - but needs work to make sense of it. We have the groundwork, but we need:
- Clear specifications of what we want the framework to accomplish to lead off with
- Coherent foundational piece that speaks to these specifications
- We need the deep dive because we are exploring new ground
- Use cases / applications to help the readers understand / see the potential
- A more accessible version aka "whitepaper"
- What would these specifications look like? What are examples? (NOTE -> This was moved into the google doc)
- Easy to understand (grokked) by the local community, but highly translatable - boundary objects = something understood widely by the epistemic (local) community that holds value there and may be generally understood/valued in a wider community. Example: police officers understand their badge deeply, and it still has some recognition in the wider public context
- The community needs to determine what they mean, but they have to "count" outside of the community as well
- Boundary objects allow moving currency from inside the community to outside the community (webcraft: community = web developers, outside = employers)
- Need backup to show validity of your badges (pilgrim badges to get free housing -> led to people creating fake ones, recipients started demanding certificates to back them up)
- Peers incentives / motivations need to drive quality and validity of assessments
- Assessments need to have possibility for continuing improvements built in
- Assessments need to allow different forms of evidence (different solutions to the problem)
- Assessment instrument needs to be open and public and general enough to work for several tasks
- Provide a framework for crowdsourced development and filtering of assessments
- Provide encouragement for its own use and motivation for users and assessors
- How to include user stories
- Aim is to help people "get it" - highlight how this would make sense, and what's the potential
- Not necessary to feature P2PU specifically, but can do that if it helps
- Stronger focus on Peer assessment
- Encompasses a lot of other assessment forms (incl. to some degree "expert" assessment by other community members - how do we define who is the peer?)
- Peer assessment is the key feature that makes P2PU (and social learning) scalable
- Agreed to bring into the document more strongly
- There is a lot of literature on peer assessment within the academy, but almost nothing in informal learning
- Making peer assessment within communities explicit, valid, translatable
- Hewlett and other foundations' view: the key issue is reliable peer learning (incl. assessment) online.
- Need to bridge two audiences for the paper. So far, the publications in this space focus on one of the two audiences:
- Academic / assessment experts - not accessible to policy makers
- Badges / implementers - very accessible, but lacks the academic foundation/ rigour
- Next steps:
- Move specifications (above) to Google Doc and clarify in bullet form
- Define, review, discuss, agree on the specifications (everyone)
- Everyone to work in the google doc
- Next week -> Discuss / agree on basic set of specifications
WEEKLY MEETING
August 24 1PM ET
Sharing of the Framework Draft. When? Check with Philipp. David will check.
Attendees:
Erin
Chloe
Alex
Nils
David
Philipp
Nils: Daniel Pink video around motivation - relationships to P2PU and this work: http://www.fastcompany.com/1646337/science-shows-that-bigger-bonuses-create-worse-performance
autonomy (nature of the learning activities design), purpose and mastery as incentives of performance (Pink - video) - connects to P2PU - that instructional staff is voluntary...and for learners, how the badge system can embody these sources of motivation.
how to utilize strengths, interests and aspirations (all as forms of badges, or as supported within the eco-system of badges - to create Purpose)
Forms of Learning to mix and match with strengths, interests, aspirations:
Exploring, Tinkering...as badge paths / mechanics for learning
"Explore"
"Tinker" Trial and error - building and testing
"Building/making"
"Problem solving?",
Hacking
Hanging out, messing around, geeking out (ala Mimi Ito)
"Assess" (e.g. literature review)
Alex: blog post on boy/eagle scout badges
It was more the process/experience - getting connected with mentors, etc.
Examples we see now lose that process - how can we bake in more of that process?
intrinsic v extrinsic motivation is not as clear of a delineation - the two can go hand in hand
motivate people not only to learn the things that they are there to learn, but also engage in the assessment as well (since we are mainly working with assessments)
Re achievement systems: something to check out is a collection of short papers in the Game Studies issue from last Feb; it's all about achievement systems http://gamestudies.org/1101
>this one in particular might be helpful, Re:process http://gamestudies.org/1101/articles/medler
What are you good at, what do you like to do, what do you want to do in your life - looking at intersection
badge system should reflect those interests, self-driven learning pathways/plans, etc
helping people to locate their passions (aspirational badges- what do you want to be good at, what is the community you want to join)
Orientation:
Anya K - DIY learning plan - launching on August 30 with small group
For SoW up in the air still since the model is changing slightly - might be that the orientation/learning plan should be a challenge
Nils is volunteering to get into that loop - Erin will follow-up with him about how to get involved in those conversations
Chloe:
Starts September 1st
Helpful for her: use cases for learners and organizers / experience / taxonomy of forms of learning (i.e. tinkering, exploring)
David: 4 steps of a cycle - (can jump back and forth among the 4 quadrants)
plan - gather direction and form purpose and aspirations; decide the time scale
do/act - implement the plan. tools to help action
study - review & reflect. check on your plan. check on external context. how far have you come. is the plan aligned with the context
decide - on the next steps in the plan.
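A minimal sketch of how a DIY learning-plan tool might record these four quadrants (all names here are hypothetical, not part of any agreed design); learners can jump among phases in any order:

    from enum import Enum

    class Phase(Enum):
        PLAN = "plan"      # gather direction, form purpose/aspirations, decide time scale
        DO = "do"          # implement the plan, with tools to help action
        STUDY = "study"    # review & reflect; check the plan against the external context
        DECIDE = "decide"  # settle on next steps, which feeds back into PLAN

    class LearningPlan:
        def __init__(self, goal):
            self.goal = goal
            self.phase = Phase.PLAN
            self.history = []          # (phase, note) pairs, so progress can be reviewed later

        def move_to(self, phase, note=""):
            self.history.append((self.phase, note))
            self.phase = phase         # jumping back and forth among quadrants is allowed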
Paper:
A lot there - step back and see where we are at, where we need to dive more deeply, where we need to back off
Alex: fleshing out his section over the next couple of days, then will see where it fits with the rest of the paper
Goals for next week:
Do some building out/wrapping up of sections, then read through the rest of the paper
Next week will discuss how to start stitching it together - where there are gaps, inconsistencies in granularity, etc.
compare our thoughts around the paper in general - where we think it should go
Chloe read through the document as is and report back on usefulness
++++++++++++++++++++++++++++++++++++++++++++++++++
WEEKLY MEETING
August 10 1PM ET
NOTE: Next week's meeting is cancelled - too many conflicts
SoW Charter http://commonspace.wordpress.com/2011/01/18/draftwebcraftcharter/ Erin to send another if this draft is outdated.
Who is doing the assessment (peer, expert)? Why are they motivated to do the assessment?
Does assessment (by peers) serve as a mechanism to build community quickly? How does assessment help with driving participation?
Assessing participation in P2PU to encourage participation. Different than the learning validated assessment.
An expert coming in pays P2PU in effort (contributing quality assessments) in lieu of cash, and P2PU is the source of the credential for that person. (This thought came from talking about Western Governors U, where the student pays cash to take the assessment and earn the credential.)
Attendees:
Erin
Chloe
Alex
Nils
David
Updates/Progress/Notes:
- David's additions in the document - ideas on putting it to use
- Assessing the assessment level thinking - build that in where we can
- Laying out the workflow of the assessment lifecycle: who creates, when, how, and how it is then evaluated by the community, used, and eventually retired.
- Think about assessments from the task-level
- broader more open learning environment
- limit the validity of the assessments (in terms of a bigger schema) and go more granular?
- i.e. sewing the button as a task
- are we assessing that or the higher level, making the dress (if you can't sew a button, you can't make a dress)
- Erin: at this point, we are looking at both - assessing at the task level and higher level (which might be the accumulation of lots of little tasks/badges)
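A rough sketch of the "both levels" idea above: granular task assessments that roll up into a higher-level badge, following the sewing example in the notes (class names, fields, and the extra task titles are illustrative assumptions, not a P2PU spec):

    class Task:
        """A small, assessable unit of work, e.g. 'sew a button'."""
        def __init__(self, title):
            self.title = title
            self.passed = False        # set True once the task-level assessment succeeds

    class Badge:
        """A higher-level recognition earned by accumulating passed tasks."""
        def __init__(self, title, tasks):
            self.title = title
            self.tasks = tasks

        def earned(self):
            return all(task.passed for task in self.tasks)

    # if you can't sew a button, you can't make the dress:
    dress = Badge("Make a dress", [Task("Sew a button"), Task("Cut a pattern"), Task("Hem a seam")])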
Who is creating the assessment?
- We are and course organizers can as well - but we can set the standard by the ones we build, scaffolding we provide
How to incentivize people to perform the assessments
- this is a big question
- assumptions: leverage course community, drive evangelism around particular topics/tech/skills, etc.
- How do deeper learning standards, such as problem solving, fit into the assessment process we are discussing?
- Assessing participation - a need of P2PU - to encourage participation via assessment
- Does this paper focus on broader assessment outcomes, or focus on something concrete?
- ...how the outcomes of assessment determine the design...for example if the outcome is to validate for professional licensing, or for validating that learning has happened, or that someone already HAS the skills needed for a badge, or to increase and encourage participation in the community, or to drive interaction practices of the "designers-offerers" to ensure that broad outcomes like "problem solving" are achieved via many learning experiences. All of these outcomes require different assessment system design decisions.
Next Steps:
- Aiming for a rough draft by 9/7
- Reputation systems- Alex will work on a broad overview of badges, motivations, reputation economies, etc. And clean that piece up. Will email at the end of next week with a progress report.
- Nils will revisit final section (meta-assessment) as well as read through totality to make sure we are not going too far afield.
- Need to decide whether we need something about a "badge lifecycle"--it's not in there now.
- David has uploaded an assessment creation process article/report and we'll all take a look and see how it may inform this document.
- No meeting next week. Will talk again on 8/24.
++++++++++++++++++++++++++++++++++++++++++++++++++
WEEKLY MEETING
August 3 1PM ET
Attendees:
- Erin
- Alex
- David
- Chloe
- Philipp
Nils will be absent, but notes that he made progress on the Metrics of Framework Success section in the Google Doc.
Contract:
- Generally okay
- Payment schedule: One at the beginning and one at the end
- Take out monthly invoice language
Progress to date / Notes:
- updates in Google Doc
- who can create assessments? are nonexperts going to be creating assessments
- if so, how does that work? who vets it? how can we scale it?
- generic vs highly specific rubrics
- can our metadata and tags contribute to a generic rubric?
- crowdsource the rubric!
- strategies for assessments - what you want the learner to show you, do or leave behind
- student model (what does the student do) + task model (how do I let the student know what I am looking for) + evidence model (what is the thing they create) underneath what we are looking for (a rough sketch follows this list)
- one thing people can leave behind or do that is linked to behavior
- how much do badges matter here?
- we don't have to talk about them very much at all
- including reputation system and that type of discussion that ultimately affects the assessment
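A rough sketch of how the three models above might be written down for a single assessment, with a generic rubric that the community could extend; the field names and example content are assumptions for illustration, not an agreed design:

    assessment_spec = {
        "student_model": "what the learner does when demonstrating the skill",
        "task_model": "the task and prompt that tell the learner what we are looking for",
        "evidence_model": "the thing they create or leave behind, linked to behavior",
        "rubric": [
            # generic elements so the rubric can be reused across tasks and crowdsourced
            {"element": "meets the stated criteria", "scale": (1, 2, 3)},
            {"element": "shows the working process", "scale": (1, 2, 3)},
        ],
    }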
Next Steps/This Week:
- try to get further with each section
- not getting too deep - thinking about the simplicity - especially for the assessment creator
- meet next week, August 10, 1pm ET
- NOTE: cancelling the meeting on August 17th since there are multiple conflicts
++++++++++++++++++++++++++++++++++++++++++++++++++
WEEKLY MEETING
July 27 1PM ET
Present:
- Alex Halavais: alex@halavais.net, Skype: halavais
- Erin (skype: eknight21)
- David G.
- Nils P. (Skype: nils_peterson)
- Chloe (Skype: chloeatplay)
NOTES:
Framework Google Doc:
- Good starting point - helpful framing for moving forward
- Next steps: Need to build out with literature overview in each area/start basic outline:
- Section focus/responsibility for the next week:
- Open Learning Environments: Alex
- Assessment Framework: David
- Framework Success: Nils
- Make sure to put the references at the end of the doc as we go to front load some of the work
Design ideas (SoW) - where should these go
- If get down to detail far enough, can include design ideas
- Can use model we discussed before as illustrations of the framework
- Audiences for the framework paper:
- Hewlett - general theory, but also working examples
- P2PU - advancing implementation (this could be a separate doc and referenced from the main paper)
For Chloe (implementation):
- specific examples
- taxonomy of types of assessments
- she starts FT September 1st so if we can get to some specifics by then
++++++++++++++++++++++++++++++++++++++++++++++++++
SOW DESIGN PLANNING MEETING
July 25 1PM ET / 10 AM PT
Agenda:
- Badge/Assessment Pilot Review
- Review SoW strategy (+ badges)
- Quick review of current Task model (where assessments would live)
- Plans for badge integration
- Discussion/suggestions/feedback
- Are there low hanging fruit options to beef up the assessment/badge offering from within SoW
- Design questions:
- Peer assessment flow: should answers/feedback be private until the badge has been achieved?
NOTES:
Thoughts on voting/rubrics:
- When using rubrics defined like ours, it is hard to roll up into no/yes votes
- Introduction of a range instead of a yes/no (scale of 1-3)
- then make the setup an improvement focus, not just binary
- for feedback, describe what you are seeing based on the language of the rubrics, or how this person could improve
- or could do yes or no on each element of the rubric; don't have to make the summative decisions - could lead to more automated feedback
- Don't want the reviewer to be in punishing mode, so frame it as helping mode instead if we can
- The best feedback you can get is little critical feedback - little no's (a big no is too much power/pressure)
- As simple as possible - each element of the rubric has a simple scale for feedback
- Here is what I am observing
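A sketch of the per-element, 1-3 scale idea above: no single summative yes/no, just a small score and an observation for each rubric element, rolled up into improvement-focused feedback automatically (the function and field names are assumptions, not a spec):

    def summarize_reviews(reviews):
        """reviews: list of dicts mapping rubric element -> (score from 1-3, observation)."""
        rolled_up = {}
        for review in reviews:
            for element, (score, observation) in review.items():
                rolled_up.setdefault(element, []).append((score, observation))
        return {
            element: {
                "average_score": sum(score for score, _ in entries) / len(entries),
                "observations": [obs for _, obs in entries],   # "here is what I am observing"
            }
            for element, entries in rolled_up.items()
        }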
Assessment Design:
- Currently have a 1:1 relationship between the assessment and type
- Are there situations where there should be a combination of things? i.e. some peer assessment and some guru assessment required to get the badge
- Expert buy in might be on the peer assessment feedback itself (guru validates the feedback)
- Puts a lot of weight on the expert level (which right now is the weakest, especially the way they are set up - could lead to a single-minded perspective)
- Badge holder is an expert - maybe it means that you just have done the work for a particular skill, not that you are the end-all be-all
- Can move from novice to quality assessor, which says something about your skills/knowledge
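One way the "combination" question raised above could play out, sketched loosely (the rule, threshold, and names are hypothetical): a badge requires several peer reviews plus a badge-holding expert validating the peer feedback itself rather than re-reviewing the work:

    def badge_ready(peer_reviews, expert_validations, min_peer_reviews=3):
        """peer_reviews: completed peer assessments; expert_validations: dicts from gurus."""
        enough_peers = len(peer_reviews) >= min_peer_reviews
        feedback_validated = any(v.get("approves_feedback") for v in expert_validations)
        return enough_peers and feedback_validated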
Task model:
- Think it's a good move (woo!)
- Assessment process needs to be much more byte-sized, maybe don't have html assessment, but instead have various tasks that roll up to those types of badges/skill demonstrations
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
KICK OFF MEETING
July 19th, 2011
Attendees:
Erin, Philipp, Chloe, Nils, David, Alex
Hewlett Grant: looking at deeper learning - how to assess and provide evidence for
- 1) we think that in collaborative peer learning environments, deeper learning competencies are developed, likely more than in traditional settings
- 2) online, there are more affordances b/c of data; mining it to see how it relates to competencies
Deliverables:
- 1) Framework for how to deliver this stuff - deeper learning + collaborative peer learning environments, how do they relate and how can we show evidence
- enough of the literature and background to ground it, but not super heavy
- should guide how we then will implement it for SoW
- take this in a direction that is interesting to all of us
- Philipp and I holding the pen - before maternity leave, have rough draft
- Nils: more work before September 1
- Alex: discussions until Sept 1, writing Sept 1 - Oct 1 (actually, fine throughout, except probably during that first week or two of Sept)
- David: good with writing anytime
- Leave today with broadest outline - shorter is better (14 pages; 5-6 big good ideas)
- 2) Design models and ideas for SoW
- Group produces mock-ups or wireframes about what the assessments could look like
- Help understand the evaluation framework
- Timing is asap: could start with barebones as long as going in the direction that makes sense
- Review what we are doing now, and SoW plan - TASK: Erin will set up a call for next week
Hewlett's Definition of DEEPER LEARNING:
a combination of the fundamental knowledge and practical basic skills students will need to succeed in a fiercely competitive global economy. Specifically, our definition of deeper learning brings together five key elements that work in concert:
- Master core academic content
- Think critically and solve complex problems
- Work collaboratively
- Communicate effectively
- Learn how to learn (e.g., self-directed learning)
Draw in the 21st century learning - be clear that we are redefining these for this environment
SoW Design Phase
- content manager could align tasks/ideas with the 5 core pieces of deeper learning
- use the badge as the definition of what we are looking for and have it assigned to various tasks
- what's the best way of asking those questions - or how to break them down into activities (a taxonomy for each of the core pieces)
- Katie Salen paper - breaking it down and putting it back together again
- preference would always be info we have in the log, but it is easy to write code to get other data; surveys or ethnographic work would add texture and value
Framework Mapping:
- a) start with deeper learning
- Deeper learning is a phrase used to signify that certain aspects of schooling are currently being driven by the testing culture and the attendant learning is often measured in only a few ways; this kind of assessment and recognition of learning misses some important, more complex outcomes of education. For example, collaboration is a crucial skill for success in the global workplace, but is hard to assess with a paper-and-pencil or fill-in-the-bubble test; and while used as a teaching method in formal education from time to time, it is rarely assessed in a way that would provide informative feedback to the learner about how to improve in this dimension.
- Review http://www.educause.edu/Resources/Browse/Deeper%20Learning/31407
- See: http://www.eschoolnews.com/2011/06/01/alliance-calls-for-deeper-learning-to-better-prepare-students/
- See: http://www.learningandteaching.info/learning/deepsurf.htm
- See: http://www.engsc.ac.uk/learning-and-teaching-theory-guide/deep-and-surface-approaches-learning
- See: http://www.west.asu.edu/nlii/learning.htm
- See: http://www.west.asu.edu/nlii/learningtable.htm#ownership
- Deeper learning items on most current lists of "21st Century Knowledge and Skills" share some characteristics that make them hard to measure with large-scale standardized bubble tests. They are complex, they are often entailed and evoked in real-world settings, and they manifest themselves as events rather than objects. By complex we mean that they are called for in situations with uncertainties, with overlapping simultaneous causal factors, and with nonlinear, highly networked relationships. By events we mean "knowledge-in-action"
- One way to explore the conditions that promote deeper learning is to ask two questions of each skill: 1) what does someone do when they are demonstrating x,y,z (this is the "student model" of the skill-in-action) and 2) what situations are they in when demonstrating x,y,z (this is the "task model" that provides an opportunity to perform the skill-in-action). For example, what does someone do who is proficient in critical thinking and solving complex problems? He or she...
- take several perspectives on the problem or situation and examine the problem from several sides, taking note of the benefits and limitations of each point of view
- create multiple potential solutions, then test those ideas against criteria developed to judge the worthiness of a good solution
- compare and contrast among options
- argue for one position and then argue against it
- build a model and test its limits for representing the problem and offering solution avenues
- seek outside opinions and critique
- and so forth
- Then, what kinds of situations give rise to these sorts of behaviors
- when a problem has more than one right answer [eg. a design-type problem]
- when there are many conflicting potential outcomes of a problem and its solution
- when there are many unknowns
- etc.
- Deeper learning is involved under circumstances like these because settling upon a solution requires going beneath the surface features of the problem to grapple with underlying more nuanced decision-points and consequences (e.g. "building a bridge that will wipe out a low-income neighborhood").
- The measurement problem for these kinds of learning environments is how to sample the space of the student model and task model and build a defensible inference from the evidence. It's no wonder that large-scale assessments have not been able to solve this kind of problem; but things are about to change.
- See: SMARTER balanced assessment Consortium http://www.k12.wa.us/smarter/
- See: Partnership for Assessment of Readiness for College and Careers, or PARCC http://parcconline.org/
- b) brief discussion of other frameworks and how ours fits (what we are borrowing, what we are redefining) - what does open, social, collaborative learning mean
- Is the idea here to compare learning environments, assessment and recognition systems...?
- c) fit this back to the deeper learning, relationship btwn open collab learning and deeper learning
- what do we do better with online learning? personalization, timeliness, linking to networked resources, locus of control (self-direction). Open collaborative learning is a context that could readily meet some of the criteria of the task model (above). If the workspace is such that it captures the work history of a _group_ of learners actually _collaborating_ to solve a complex problem, they almost certainly will leave a trace of actions that could be understood as critical thinking, e.g., evidence of the "student model." In addition to examining process, the task could be designed to have some real-world evidence on completion, e.g., a product that can be examined and tested by an audience online. (A rough sketch of this evidence-trace idea follows this outline.)
- assessment, motivation, etc. Perhaps "crowd-sourcing" of the assessment. If the task were to play a Bach concerto on the violin, the student could post a YouTube playing the concerto into the community of others playing the same piece. The community could be asked for assessment.
- Assessing what someone knows involves "a model of how students represent knowledge and develop competence in the subject domain, tasks or situations that allow one to observe students' performance, and an interpretation method for drawing inferences from the performance evidence thus obtained" (pp. 36) of Pellegrino, J., N. Chudowsky, and R. Glaser, eds. Knowing what students know: The science and design of educational assessment. Committee on the Foundations of Assessment, Board on Testing and Assessment, Center for Education, National Research Council. 2001, National Academy Press: Washington, DC. So any kind of symbol of such an assessment (such as a badge that represents what someone knows or can do) needs to emerge from an intersection of these three opportunities.
- New assessment mechanisms and processes are arising to capture deeper learning, including technology-enabled mechanisms that include peer assessment, automated documentation, interactive tutoring and feedback, and the social construction of signs and symbols of the meaning and value of complex knowledge and skills, including deeper learning outcomes that are best understood over time. Each of these methods offers new possibilities for promoting and validating deeper learning, but at the same time, they need to be critically examined for their benefits and challenges to existing professional practices.
- d) foundation for the implementation
- from concepts to measures
- e) meta-assessment, how do you evaluate it?
- how do you measure the measures
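To make section c) concrete, here is a rough sketch of our own (not from any cited source) of how activity traces in a group workspace might be mapped to the deeper learning elements as candidate evidence; the event names are invented for illustration:

    TRACE_TO_COMPETENCY = {
        "proposed_alternative_solution": "Think critically and solve complex problems",
        "commented_on_peer_work": "Work collaboratively",
        "published_project_summary": "Communicate effectively",
        "revised_plan_after_feedback": "Learn how to learn",
    }

    def evidence_counts(activity_log):
        """activity_log: iterable of event-type strings captured by the workspace."""
        counts = {}
        for event in activity_log:
            competency = TRACE_TO_COMPETENCY.get(event)
            if competency:
                counts[competency] = counts.get(competency, 0) + 1
        return counts   # raw counts only; a defensible inference still needs an interpretation method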
where do badges fit in? might be its own system
need to have a name for ours
2 groups granted millions from the government to rethink assessments (for K12):
Mimi's connected learning group is connected
can we hook into these guys? at least good to mention in the document
1) SMARTER balanced assessment
2) Partnership for Assessment of Readiness for College and Careers, or PARCC
TASK: everyone reach out to their network to get more info on these guys
PROCESS
- Google doc - Alex will start
- Regular weekly calls - Wednesdays 1PM (Skype)
- Separate call for SoW/assessment foundation
- Need to work on the definition of what the content person is building - designed experience (template)
- moving P2PU from the course model to the quest or task model, so how we define the task/quest is really crucial (it must stand up to the framework but also be as simple as possible)
POST-it Notes from Badges Mtg II (July 18-19)
Post-it notes were grouped by the participants into groupings, two of those groups are reproduced here as they may shed light on design questions or framework ideas.
Nils Peterson's editorial comments while posting these notes are in [ ]
BADGES FOR LEARNING
- How to learn from the mistakes of educational games. Build on a theoretical framework instead of trying things in a hit-or-miss fashion
- What are the types of "power ups" that getting badges can unlock? i.e. teach a topic
- Define the informal learning space where badges can play a role for identity, process, participation, achievement, etc.
- Foreground the educational outcomes over the technical whizbangs.
- How can others learn from your badges?
- What can badges tell us about who we want to be (model identities)?
- How do I see patterns in other people's careers [badge collections]? How do I learn from that?
- Is the assessment [criteria] public?
- Displaying evidence for obtaining a badge is optional? [seeing the evidence could be useful to other learners]
- Pedagogically agnostic: but can there be values? [Possible values might be:] Language and culture, building the tools, and building the community.
- Are there different badge considerations for different ages? How can one system support life-long learning?
- Are we scoping badges just in the learning and EDU context?
ASSESSING BADGE SYSTEMS
This heading was also the topic of a breakout session. The original post-its were augmented with new ones and organized into a structure
Guiding Questions:
- What makes a good badge system?
- How do you know if you have a good badge system?
Responses were classified into 4 groups. Group #1 was given higher weighting
Group #1
- Learning objectives are being met
- How are assessment criteria made public?
- Do peers learn from doing assessments?
- Does the system record what the learner is =NOT= good at doing?
- What to do with "failed" applications for badges
Group #2
- Is the system used for long periods in [the learner's] life?
- Does user advertise their badges in Facebook, etc?
- Do learners participate voluntarily?
Group #3
- Does the system have a user community?
- Does it have learners using it?
- Does it have robust assessors?
- Is awarding of badges automatic or does it require human judgement?
- Why will peers assess each other well? [assumes system facilitates peer assessment]
- Are badges better to mark a learning process completed or an assessment passed?
- [Does the Community reflect on the utility of the assessments?]
Group #4
- Has robust assessment instruments/ criteria
- How to assess the system without disturbing it ==Portfolio==