The goal of the seminar was to search for new commonalities between the fields of expert systems and intelligent tutoring systems. Because academic research, industrial development, and practical applications have greatly matured both fields during the last decade, valuable insights could be gained by pursuing such a comprehensive goal.
Originally, the two fields had relatively separate, although in many respects similar, goals and agendas. The task of developing expert systems was seen as encoding the knowledge and competence of human experts. In a similar fashion, intelligent tutoring systems were intended to capture expert knowledge plus the added pedagogical expertise of the skilled teacher. The agenda of tutoring systems research, however, then moved toward the special problems of modeling the not-yet-enlightened student and of how human knowledge can be conveyed, while work on expert systems focused on particular problems at the limits of human capability.
The presentations and discussions of the seminar focused on the knowledge that is shared between a teacher and a student, among practitioners of some field of expertise, and among the various participants in complex and dynamic work and training situations in general. The seminar revealed that both expert systems and instructional systems are improved by explicit representations of the objects and actors in work situations. In such situations, explanations play an important role in both training and coordinated man-machine work.
While expert systems mostly focus on representations of the situations they are intended to address, tutoring systems are frequently tailored to training for specific situations. As systems are applied to increasingly complex work and training situations, these differences are becoming less important.
As both fields have matured, the shared focus has thus become more important and more achievable. With training emerging as an ever-higher cost for business and industry, its automation has become increasingly valued, as has the possibility of extending human capability with machine expertise. With a more unified understanding, these systems can thus be made more useful.
Working with colleagues at the US Air Force Armstrong Laboratories, we have developed and made practical use of a job analysis technology called PARI (for Precursor-Action-Results-Interpretation). In this technology, experts are first asked to pose problems to one another that they believe demonstrate the range of competences an expert in a job should have. Then, the sequence of actions taken by the expert solving each task is reviewed with that expert. A series of "stimulated recall" reviews probe for the expert's mental model of the situation under which each action was taken (its Precursor), the purpose of the Action itself, the expected Results, and an Interpretation of what was learned by taking that action and noticing its results.
From this body of data, statements can be extracted about the process whereby experts represent tasks in their domain, their models of domain processes, and the goal structures that constitute their performance expertise.
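PARI prescribes an interview procedure, not a data format, but the four-part structure of each reviewed action lends itself to a simple record. The sketch below (all names and the example content are hypothetical, not drawn from the Armstrong Laboratories work) shows one way such a protocol might be encoded for later analysis:

```python
from dataclasses import dataclass

@dataclass
class PariStep:
    """One reviewed action in a PARI protocol (hypothetical encoding).

    precursor: the expert's mental model of the situation before acting
    action: what the expert actually did
    results: the outcome the expert expected or observed
    interpretation: what the expert concluded from the results
    """
    precursor: str
    action: str
    results: str
    interpretation: str

# A protocol for one troubleshooting task is then an ordered list of steps.
protocol = [
    PariStep(
        precursor="Suspect the fault lies in the power supply stage",
        action="Measure voltage at test point TP3",
        results="Reading is 0 V instead of the nominal 5 V",
        interpretation="Fault is upstream of TP3; narrow search to the regulator",
    ),
]

def actions(steps):
    """Extract the bare action sequence, e.g. for comparing two experts."""
    return [s.action for s in steps]
```

From such records, the action sequences, situation models, and interpretations can each be extracted separately, which is what makes the goal-structure analysis described above possible.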
From more recent work with other colleagues, notably David Hurley at Pittsburgh and Charles Bloom and Scott Wolff of US West Technologies, we know that it is possible to coach analysts as they convert the requirements statements for a software package into an object-based analysis and design. A coach that does this, which we call Sloop, is being developed. We now believe it is possible to combine the PARI approach and the Sloop approach to create a complete job analysis methodology that goes from initial expert interviews to an object-based specification of the job environment and expert performance knowledge. The key is to see that just as software requirements refer both to the processes inside the software and to the ways in which it will be used, job analyses refer both to the processes inside the expert and to the work environment in which that expertise is exercised. Plans for building a job analysis coach to reflect this approach are now being refined.
The problem of student modeling in intelligent tutoring systems is often claimed to be intractable, a claim that has prompted a shift from intelligent tutoring systems to more open learning environments. The question remains whether student models can improve learning with these environments. To answer this question, we have developed a fairly elaborate episodic student model in the context of our LISP tutor. This student model implements a case-based reasoning approach to student modeling, embedded in an elaborated help system that aids novices learning LISP. The episodic student model (ELM) can be used advantageously to improve and individualize the cognitive diagnosis of program code and to find examples and so-called remindings.
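The abstract does not specify ELM's internals, but its case-based core, retrieving stored episodes most similar to the student's current one to serve as remindings, can be sketched as follows (the feature representation and the Jaccard similarity measure are illustrative assumptions, not ELM's actual diagnosis machinery):

```python
def retrieve_remindings(episode_features, case_base, k=1):
    """Return the k stored episodes whose feature sets overlap most
    with the new episode (Jaccard set similarity; illustrative only)."""
    def jaccard(a, b):
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if a | b else 0.0
    ranked = sorted(case_base,
                    key=lambda case: jaccard(episode_features, case["features"]),
                    reverse=True)
    return ranked[:k]

# Hypothetical case base of earlier LISP programming episodes.
case_base = [
    {"solution": "recurse on the cdr", "features": ["list", "recursion", "cdr"]},
    {"solution": "use mapcar",         "features": ["list", "mapcar"]},
]
best = retrieve_remindings(["list", "recursion"], case_base)
```

The retrieved episode can then be presented to the novice as an example, which is one sense in which an episodic model individualizes help.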
A typology of problems is presented that is used for indexing and accessing reusable problem solving components in a library that supports the CommonKADS methodology for building knowledge-based systems. Eight types of problems, such as planning, assessment, etc., are distinguished, and their dependencies are explained. These dependencies suggest that the typology is to be viewed as a "suite" rather than the usual taxonomy of "generic tasks". Developing the suite has led to some new insights and elaborations of Newell & Simon's (1972) theory for modeling problem solving.
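What distinguishes a "suite" from a taxonomy is that its members are related by dependency edges rather than by subclass links, so a natural operation is asking whether one problem type relies on another. The sketch below illustrates this; only planning and assessment are named in the abstract, so the dependency edge shown is a hypothetical placeholder, not the suite's actual structure:

```python
# Illustrative only: the abstract names planning and assessment; the
# dependency edge below is a hypothetical placeholder.
suite = {
    "planning": ["assessment"],  # hypothetical: planning relies on assessment
    "assessment": [],
}

def depends_on(task, other, suite):
    """True if `task` depends, directly or transitively, on `other`."""
    seen, stack = set(), [task]
    while stack:
        t = stack.pop()
        for dep in suite.get(t, []):
            if dep == other:
                return True
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return False
```

In a taxonomy of generic tasks such a query would be meaningless; in a suite it is the query that guides which reusable components a new problem type can build on.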
Recent developments in hypermedia, computer supported cooperative work (CSCW) and broadband networks open up new potential for education and training. In many ways, this potential resembles the ideas of three early hypertext pioneers: Vannevar Bush envisioned a device called Memex "in which an individual stores his books, records and communication, and which is mechanized so that it may be consulted with exceeding speed and flexibility." Ted Nelson proposed the docuverse, which "is a structure in which the entire literature of the world is linked and forms a universal instantaneous publishing network." Doug Engelbart designed NLS, "a computer-based environment containing ... documents, memos, notes and so forth but also supports planning, debugging and communication." Combining these ideas and relating them to the field of education and training suggests a learning environment consisting of (a) individual workspaces equipped with sophisticated facilities for authoring and archiving, (b) a global information space which can be accessed from the individual workspaces for retrieving as well as publishing multimedia information, and (c) a broadband network with synchronous and asynchronous communication facilities for linking the individual workspaces. Such an environment could be provided by a "value added service" as currently discussed in research on "Intelligent Broadband Services and Networks".
The learning materials provided by such a service for education and training should take the form of hypermedia courseware offering a user interface especially designed to cope with the well-known problems of hypertext readers, i.e., disorientation, lack of overview, insufficient comprehension of the relations between distinct information units, and difficulties with browsing and navigation.
In my talk, I described SPI, an interface that explicitly addresses these issues and offers a number of facilities to support orientation, comprehension and navigation (see Hannemann, J., Thuring, M., and Haake, J.M. (1993). Hyperdocument presentation: Facing the interface. Arbeitspapiere der GMD Nr. 784. Sankt Augustin: GMD).
SPI uses a static screen layout that displays structural information together with its corresponding content. For this purpose, it employs a combination of graphical browsers and content windows. To facilitate navigation, SPI reduces interaction overhead by supporting several convenient ways of moving through the document, such as clicking on nodes in browsers or using a tool called "Navigator" for global navigation. Orientation is facilitated in SPI by indicating the reader's current position in the overall document structure and by visualizing options for moving back or further. Moreover, it is eased by the regular navigation semantics of the interface which maintains the structural and temporal context of the reader's current node.
Due to its tight coupling of different user interface components with a coherent hyperdocument structure, SPI can contribute to reducing the readers' problems listed above.
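The two kinds of context SPI maintains, the reader's position in the document structure and the temporal order of visited nodes, can be sketched as a small navigation model (class and names are hypothetical, not SPI's implementation):

```python
class Navigator:
    """Sketch of SPI-style orientation support (hypothetical names):
    track the reader's current node in the document structure and the
    temporal order in which nodes were visited."""

    def __init__(self, structure, start):
        self.structure = structure  # node -> list of child nodes
        self.current = start
        self.history = [start]      # temporal context

    def goto(self, node):
        """Move to a node, e.g. after a click in a graphical browser."""
        self.current = node
        self.history.append(node)

    def back(self):
        """Return to the previously visited node."""
        if len(self.history) > 1:
            self.history.pop()
            self.current = self.history[-1]

    def options(self):
        """Structural context: where the reader can move next."""
        return self.structure.get(self.current, [])

# Hypothetical two-level document structure.
doc = {"Intro": ["Problems", "SPI"], "SPI": ["Navigator"]}
nav = Navigator(doc, "Intro")
nav.goto("SPI")
```

Displaying `options()` and the tail of `history` side by side is one simple way an interface can show both the structural and the temporal context at once.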
One of the challenges in providing a new solution for a workplace is to create descriptions of the solution components that all of the groups involved in the development effort can understand. Furthermore, the developers must compare descriptions of the components of the new solution with descriptions of previously defined solutions. If the descriptions are similar, the developers can reuse previously defined components to refine the components of the new solution. This presentation introduces the Active Glossary, part of the Spark, Burn, FireFighter knowledge-engineering environment. It assists a development team with linking the descriptions of different components of a new solution to a vocabulary that all team members share. That is, the terms that comprise the common vocabulary are defined by their uses within a specific context. By exploiting this context information, the Active Glossary can assist the development team in finding similar components.
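One way such term-based retrieval could work is to score library components by how many shared-vocabulary terms their descriptions have in common with the new component. This is only an illustrative sketch of that idea, not the Active Glossary's actual matching mechanism, and all names below are hypothetical:

```python
def similar_components(new_terms, library):
    """Rank previously defined components by how many shared-vocabulary
    terms they have in common with the new component (illustrative)."""
    scored = [(len(set(new_terms) & set(terms)), name)
              for name, terms in library.items()]
    return [name for score, name in sorted(scored, reverse=True) if score > 0]

# Hypothetical library: component name -> glossary terms used in its description.
library = {
    "pump-controller": ["pump", "valve", "pressure"],
    "report-printer": ["report", "format"],
}
matches = similar_components(["pump", "pressure", "alarm"], library)
```

Because the terms are drawn from a vocabulary the whole team shares, a match found this way is meaningful to every group involved in the effort, not just to the knowledge engineers.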
Mutual education is needed if all the people involved in a development effort are to communicate effectively. We propose "design intent" as a mechanism for this communication. All those involved in a development effort collectively agree on how the system to be developed will interact with its users and with the more global environment in which it will be fielded, and they simultaneously document this agreement. System specifications are then derived from this design intent. To facilitate continued discussion, the design intent is embedded in the developed system as "expectation agents". These agents monitor the system in use and are triggered when expectations are not met. Such breakdowns provide an opportunity either to refine the system requirements or to educate the system users in how better to use the system.
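The expectation-agent idea can be made concrete with a minimal sketch: an agent pairs one documented piece of design intent with a predicate over observed usage events, and records a breakdown whenever the predicate fails. All names and the example event format here are hypothetical assumptions, not the proposal's actual implementation:

```python
class ExpectationAgent:
    """Sketch (hypothetical names): one agent embodies one piece of design
    intent as a predicate over observed usage events, and records a
    breakdown whenever the expectation is violated."""

    def __init__(self, intent, expectation):
        self.intent = intent            # the documented design intent
        self.expectation = expectation  # predicate: event -> bool
        self.breakdowns = []            # violations, for later discussion

    def observe(self, event):
        if not self.expectation(event):
            self.breakdowns.append((self.intent, event))

# Hypothetical design intent: users complete the order form in one session.
agent = ExpectationAgent(
    intent="order form completed in one session",
    expectation=lambda e: e.get("form_abandoned") is not True,
)
agent.observe({"user": "a", "form_abandoned": False})
agent.observe({"user": "b", "form_abandoned": True})
```

Each recorded breakdown carries the violated intent alongside the triggering event, which is exactly the material needed to decide whether to refine the requirement or educate the user.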
Computer-based training (CBT) has become a major field of investigation. However, the authoring and modifiability of current CBT systems remain very open issues. On the software side, object-oriented programming makes it possible to create and maintain libraries of reusable objects. On the cognitive side, even if these objects are designed as artifact metaphors, they remain passive and rarely include user knowledge. The aim of this paper is to introduce a new view of objects (called agents) that provides continuity between real-life artifacts and artificially created artifacts. In particular, agents can be adaptive and context-sensitive. They are intended to facilitate learning-by-doing. The design of such agents is based on Schank's learning architectures as well as on other important training-relevant needs such as evaluation, instructor aids, cooperation and networking. Examples illustrate the applicability of these new concepts, and a discussion is opened.