Knowledge tracing: Modeling the acquisition of procedural knowledge


"This paper describes an effort to model students' changing knowledge state during skill acquisition. Students in this research are learning to write short programs with the ACT Programming Tutor (APT). APT is constructed around a production rule cognitive model of programming knowledge, called theideal student model. This model allows the tutor to solve exercises along with the student and provide assistance as necessary. As the student works, the tutor also maintains an estimate of the probability that the student has learned each of the rules in the ideal model, in a process calledknowledge tracing. The tutor presents an individualized sequence of exercises to the student based on these probability estimates until the student has ‘mastered’ each rule. The programming tutor, cognitive model and learning and performance assumptions are described. A series of studies is reviewed that examine the empirical validity of knowledge tracing and has led to modifications in the process. Currently the model is quite successful in predicting test performance. Further modifications in the modeling process are discussed that may improve performance levels."

Corbett, A. T., & Anderson, J. R. (1994). Knowledge tracing: Modeling the acquisition of procedural knowledge. User Modeling and User-Adapted Interaction, 4, 253-278. https://doi.org/10.1007/BF01099821

1. SUMMARY
The paper extends the model tracing approach with a knowledge tracing approach in developing a tutoring system to support mastery learning. Instead of only tracing a student’s path to a final (correct) solution, knowledge tracing monitors the student’s changing knowledge states (learned and unlearned) during practice. Following the underlying ACT theory, knowledge is divided into “goal-independent declarative knowledge and goal-oriented procedural rules,” and the main assumption is that “procedural knowledge maps onto independent production rules.” The tutoring system uses a Bayesian approach to track and maintain a probability estimate for each rule, governed by four parameters: (i) the initial probability that a rule is already learned, (ii) acquisition - the probability of transitioning from the “unlearned” to the “learned” state at each practice opportunity, (iii) guess - the probability that a rule is applied correctly while still in the unlearned state, and (iv) slip - the probability that a mistake is made even though the rule is in the learned state.
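The Bayesian update behind this tracking can be sketched as follows. This is a minimal illustration of the standard knowledge-tracing update, not the paper's implementation; the parameter names (`p_learn`, `p_guess`, `p_slip`) and the example values are assumptions chosen for demonstration.

```python
def bkt_update(p_known, correct, p_learn, p_guess, p_slip):
    """Return the updated P(learned) after observing one response."""
    if correct:
        # Posterior: a correct answer comes either from knowing the rule
        # (and not slipping) or from a lucky guess while unlearned.
        posterior = (p_known * (1 - p_slip)) / (
            p_known * (1 - p_slip) + (1 - p_known) * p_guess
        )
    else:
        # Posterior: an error comes either from a slip while learned
        # or from genuinely not knowing the rule.
        posterior = (p_known * p_slip) / (
            p_known * p_slip + (1 - p_known) * (1 - p_guess)
        )
    # Learning step: an unlearned rule may transition to the learned state.
    return posterior + (1 - posterior) * p_learn

# Illustrative values: initial P(learned) = 0.3, then one correct response.
p = 0.3
p = bkt_update(p, True, p_learn=0.2, p_guess=0.2, p_slip=0.1)
print(round(p, 3))  # → 0.727
```

Note that because each rule is assumed independent, the tutor can run this same update separately for every production rule the student exercises.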

The paper provides four empirical evaluations of knowledge tracing (general validity, internal validity, dynamic estimation of individual differences, and test assessment), examining whether the approach works and how accurately it predicts students’ test performance. The paper concludes that a better tutoring system should: (1) model a sufficient set of production rules, (2) model differences in rule difficulty, and (3) model individual differences among students.
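The mastery-learning loop that these estimates drive can be sketched as below: the tutor keeps presenting practice opportunities for a rule until its P(learned) estimate crosses a mastery threshold. The threshold of 0.95, the parameter values, and the simulated response stream are illustrative assumptions, not fitted values from the paper.

```python
MASTERY = 0.95  # assumed mastery criterion on P(learned)

def bkt_update(p_known, correct, p_learn=0.2, p_guess=0.2, p_slip=0.1):
    """One knowledge-tracing step: Bayesian posterior, then learning step."""
    if correct:
        posterior = p_known * (1 - p_slip) / (
            p_known * (1 - p_slip) + (1 - p_known) * p_guess)
    else:
        posterior = p_known * p_slip / (
            p_known * p_slip + (1 - p_known) * (1 - p_guess))
    return posterior + (1 - posterior) * p_learn

# Simulated student responses on successive opportunities for one rule.
responses = [True, False, True, True, True]
p_known = 0.3          # assumed initial P(learned)
opportunities = 0
for correct in responses:
    if p_known >= MASTERY:
        break          # rule mastered; the tutor would move on
    p_known = bkt_update(p_known, correct)
    opportunities += 1

print(opportunities, round(p_known, 3))
```

With these toy numbers the estimate dips after the single error and then recovers, reaching the criterion after four opportunities, which mirrors how the tutor individualizes the amount of practice per rule.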
2. STRENGTHS
The paper employs four rigorous evaluation strategies to demonstrate what was, at the time, a new approach to building effective tutoring systems. Choosing LISP programming as the learning domain is an excellent choice, since the associated knowledge is quite distinctive, which probably limits the chances of correct guessing.
3. WEAKNESSES
The paper acknowledges the complexity of estimating test performance, which is largely due to the interpretation of the slip probabilities, and to the weighting of evidence when multiple observations bear on the same rule.
