Feedback Research
October 30, 2012, Vanessa Gennarelli
Research is up on Dropbox: https://www.dropbox.com/home/Public/Feedback%20Research
http://www.ted.com/talks/daphne_koller_what_we_re_learning_from_online_education.html
Findings and suggestions:
- We haven't done enough to reward people for giving feedback--surface their average score, how many reviews they've done, and the last review they left (to encourage people to keep reviewing)
- Train for suggestive and reinforcing feedback--it both improves the project and keeps folks engaged
- We should connect and sustain the interaction between feedback giver and receiver--it has pedagogical value for learners to reframe their ideas, and for iteration
- I've disparaged voting up in the past, but upvoting good, solid feedback can help learners decide which counsel to act on.
- Feedback will trend toward the positive (see the eBay examples below)--how can we finesse the positivity bias into reinforcing feedback?
- What do we think of course organizers rating n00bs' comments?
- A design that supports iteration--what do we think about a "Projects" tab and an "Open Questions" tab? How could those features work together to support getting the feedback you need quickly and show off the projects you do/are proud of/elicit the sort of reinforcing feedback that keeps folks engaged?
Questions:
- How do we define "feedback" as different from conversation?
- What makes people give feedback?
- In online communities, many of the incentives that work in traditional institutional education fall away.
- What are the different types of feedback, and which are we most interested in?
- "Applause" (deviantART, Pinterest, etc.)
- Who gets feedback? (What's the distribution of feedback?)
- E.g., on deviantART, popular items get a lot of feedback; most items get none
- Challenge: scaling the expert feedback
- Critique - the value of hard critique, see Paul Tough's book
- Is there value in observing feedback that others get for their work? (The studio model)
Notes
Tseng & Tsai:
- "The peer assessment activities consisted of three rounds, and each of the students acted as an author and a reviewer. The scores determined by the learning peers were highly correlated with those marked by the experts, indicating that peer assessment in high school could be perceived as a valid assessment method."
- 184 10th-grade students (16-year-olds) from four different classes in a school in Taiwan.
- 3 round model: http://cl.ly/KYEM
- 4 types of feedback
- Reinforcing: positively expressed feedback
- Didactic: lecture tone
- Suggestive: hints
- Corrective: this is wrong
- Each type had an impact on student performance:
- Suggestive feedback was positively correlated with their performance in all dimensions (r = 0.18, 0.25 and 0.21 for Creativity, Relevance, and Feasibility, respectively, p < 0.05)
- Reinforcing feedback of the first round was positively correlated with students’ scores on the three dimensions of the second (r = 0.38, 0.49 and 0.38 for Creativity, Relevance, and Feasibility, respectively, p < 0.01)
- These findings suggested that Reinforcing and Suggestive feedback should be constructive in students' development of their work, while feedback of lengthy explanation with a didactic tone produced negative relationships to students' project performance.
- http://cl.ly/KY7K
Fan:
- Address cheating and fraud in reputation management on the internet
- Ebay sellers: A numerical rating is associated with the comment, indicating whether the comment is positive (+1), negative (-1), or neutral (0). But "In [22], empirical results show that eBay’s system does not provide sustained incentives for reputable sellers to provide honest quality over time."
- 2 models to keep folks honest--
- Average rating
- Cumulative rating
- "Under both mechanisms, sellers will lose incentives when their reputation scores are high enough and the transaction history is long enough."
- Recommendations--include recency of review, make it difficult for folks to change identity
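Fan's two mechanisms, and the recency recommendation, can be sketched roughly. A minimal sketch, assuming a seller's history is just a list of +1/0/-1 ratings; the function names and the decay constant are my own, not from the paper:

```python
# Sketch of Fan-style reputation scores over a history of +1/0/-1 ratings.
# Function names and the decay constant are illustrative, not from the paper.

def average_rating(history):
    """Mean of all ratings; one new rating barely moves a long history."""
    return sum(history) / len(history) if history else 0.0

def cumulative_rating(history):
    """Running total; always grows for a mostly-positive seller."""
    return sum(history)

def recency_weighted_rating(history, decay=0.9):
    """Weight recent ratings more, so incentives don't vanish over time.
    history[-1] is the newest rating; older ratings decay geometrically."""
    return sum(r * decay**age for age, r in enumerate(reversed(history)))

# With a long, mostly positive history, one cheat costs little on average:
long_history = [1] * 200
print(average_rating(long_history))                  # 1.0
print(average_rating(long_history + [-1]))           # ~0.99: barely a dent
print(recency_weighted_rating(long_history + [-1]))  # a recent -1 hits much harder
```

This illustrates the quoted incentive problem: with a long history, a single dishonest transaction barely moves the average or cumulative score, while recency weighting keeps a fresh negative rating costly.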
Preece:
- Evaluates online communities in terms of:
- Sociability: number of participants in a community, the number of messages per unit of time, members’ satisfaction, and some less obvious measures such as amount of reciprocity, the number of on-topic messages, trustworthiness
- Usability: numbers of errors, productivity, user satisfaction
- Reciprocity is important to sociability--every feedback feature should prompt more feedback (i.e., if you get reviewed, the feature should prompt you to review someone else)
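The reciprocity idea could be wired up as a simple prompt queue. A hypothetical sketch (class and message wording are mine, not an existing API): when a member receives a review, queue a nudge asking them to review a peer in turn.

```python
# Illustrative sketch of a reciprocity prompt: receiving a review
# triggers a nudge to review someone else. All names are hypothetical.

from collections import deque

class ReciprocityPrompter:
    def __init__(self):
        self.pending_prompts = deque()

    def on_review_received(self, reviewer, recipient):
        """Record that `recipient` got a review and queue a nudge for them."""
        self.pending_prompts.append(
            f"{recipient}, you just got feedback from {reviewer}--"
            f"pay it forward and review a peer's project!"
        )

    def next_prompt(self):
        """Pop the oldest pending nudge, or None if the queue is empty."""
        return self.pending_prompts.popleft() if self.pending_prompts else None

p = ReciprocityPrompter()
p.on_review_received("Ana", "Ben")
print(p.next_prompt())  # Ben is nudged to review someone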
Forte:
- 45 students used Wikipedia to improve writing
- Fear of submitting work for peer review--"my work isn't good enough"
- Peer review makes a learner aware of an audience & engages the wider public
- Prompts learners to have strategies for their ideas
- "It appears that affective response to others’ views was what influenced their writing, especially in the case of the feminist. Her learning experience was only obtainable through direct questioning."
Dellarocas:
- Ebay:
- Buyers left feedback on sellers 52.1% of the time; sellers on buyers, 60.6% of the time.
- Feedback is overwhelmingly positive; of feedback provided by buyers, 99.1% of comments were positive, 0.6% were negative, and 0.3% were neutral.
- "In general, reputation effects benefit the most patient player in the game: The player who has the longest time horizon (discounts future payoffs less) is usually the one who is able to reap the benefits of reputation."
Lin:
- In this study, the web-based peer assessment was a two-stage compulsory evaluation that partially substituted for teacher assessment. During the process, a student submitted assignments in HTML format, which were then anonymously uploaded through a web-based peer assessment system, named Networked Peer
Lampe:
- "A further method of shaping new user behavior is the use of feedback provided by the larger community, often in the form of rating systems that provide evaluations of new contributions."
- "Slashdot has developed a system of distributed moderation by which experienced members of the site provide feedback in the form of ratings about the quality of comments posted to its discussion forums."
- "Butler [2] similarly found that more active listserv’s not only had more users entering the discussion, but that they lost users at a greater rate than smaller structures."
- "Each posted comment message has a current score, from –1 to +5. Upon reading a comment, a moderator can expend a point in order to raise or lower the comment’s score by 1. Users choose from a list of descriptors for the comments, such as “Off-topic”, “Troll”, “Insightful”, “Funny”, or “Overrated”, with each comment type carrying with it an inherent -1 or +1 moderation."
- "New users who received no moderation were less likely to make a second comment than users who received either positive or negative initial feedback through moderation. Even when a user receives feedback on their first comment, lack of feedback on the second is associated with approximately 30% of users ceasing commenting."
- "There is some indication that receiving two negative moderations in a row makes it unlikely that a user will receive a positive moderation."
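The quoted Slashdot mechanics can be sketched as a small data structure. An illustrative sketch of the quoted rules only (score clamped to -1..+5, each moderation spends one of the moderator's points and moves the score by the descriptor's inherent +1 or -1); the class, method names, and starting score are my own:

```python
# Sketch of the Slashdot moderation rules quoted above;
# class/method names and the default starting score are illustrative.

DESCRIPTOR_DELTA = {
    "Insightful": +1, "Funny": +1,                   # positive descriptors
    "Off-topic": -1, "Troll": -1, "Overrated": -1,   # negative descriptors
}

class Comment:
    MIN_SCORE, MAX_SCORE = -1, 5  # "a current score, from -1 to +5"

    def __init__(self, start_score=1):
        self.score = start_score

    def moderate(self, moderator_points, descriptor):
        """Spend one moderation point to apply a descriptor to this comment.
        Returns the moderator's remaining points."""
        if moderator_points <= 0:
            raise ValueError("no moderation points left")
        delta = DESCRIPTOR_DELTA[descriptor]
        self.score = max(self.MIN_SCORE, min(self.MAX_SCORE, self.score + delta))
        return moderator_points - 1

c = Comment(start_score=1)
points = 5
points = c.moderate(points, "Insightful")  # score rises to 2, 4 points left
points = c.moderate(points, "Troll")       # score falls to 1, 3 points left
```

The clamp means further "Funny" moderations on a +5 comment change nothing, which matches the bounded score range in the quote.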
REFERENCES
Dellarocas, C. (2003). The digitization of word of mouth: Promise and challenges of online feedback mechanisms. Management science, 49(10), 1407-1424.
Forte, A., & Bruckman, A. (2006, June). From Wikipedia to the classroom: exploring online publication and learning. In Proceedings of the 7th international conference on Learning sciences (pp. 182-188). International Society of the Learning Sciences.
Lampe, C., & Johnston, E. (2005, November). Follow the (slash) dot: effects of feedback on new members in an online community. In Proceedings of the 2005 international ACM SIGGROUP conference on Supporting group work (pp. 11-20). ACM.
Fan, M., Tan, Y., & Whinston, A. B. (2005). Evaluation and design of online cooperative feedback mechanisms for reputation management. IEEE Transactions on Knowledge and Data Engineering, 17(3).
Preece, J. (2001). Sociability and usability in online communities: Determining and measuring success. Behaviour & Information Technology, 20(5), 347-356.
Scardamalia, M. (2004). CSILE/Knowledge Forum®. In Education and technology: An encyclopedia (pp. 183-192). Santa Barbara: ABC-CLIO.
Tseng, S.-C., & Tsai, C.-C. (2007). On-line peer assessment and the role of the peer feedback: A study of high school computer course. Computers & Education, 49, 1161-1174.