The Mymory Project

GenerationOfMultiModalAttentionEvidence

** The user’s ''attentional state & focus'' is estimated (sketch below)
*** using a webcam and eye-tracking technology.
** The ''text work recognition'' module (sketch below)
*** combines eye tracking with the visible text area and with scroll and mouse events.
*** classifies visible text passages as read / skimmed / unread / …
** ''Interaction devices'' explicitly provide attention data (sketch below)
*** e.g., using a digital stylus to highlight interesting text passages
** Connections to the ''physical environment'' come within reach (sketch below)
*** e.g., recognition of bar codes / RFID tags on books
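
How the ''attentional state'' estimate might be derived from the eye-tracking channel is sketched below. The fixation representation, the 250 ms threshold, and the three state labels are illustrative assumptions, not the project's actual estimator; the webcam channel is omitted.

{{{
from dataclasses import dataclass
from statistics import mean

@dataclass
class Fixation:
    x: float            # gaze position on screen, in pixels
    y: float
    duration_ms: float  # how long the gaze rested at this point

def estimate_attentional_state(window: list[Fixation]) -> str:
    """Classify a window of recent fixations into a coarse attentional state.

    The 250 ms threshold and the state labels are illustrative placeholders.
    """
    if not window:
        return "away"      # no on-screen fixations: attention is elsewhere
    if mean(f.duration_ms for f in window) > 250:
        return "focused"   # long, stable fixations suggest concentrated reading
    return "scanning"      # short fixations suggest searching or skimming
}}}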
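
For the ''text work recognition'' module, the sketch below shows one way gaze, scroll offset, and passage geometry could be combined into the read / skimmed / unread labels. The fixed per-word reading rate, the 0.7 cutoff, and the passage layout model are assumptions; mouse events are left out for brevity.

{{{
from dataclasses import dataclass

@dataclass
class Passage:
    text: str
    top: int              # vertical extent in document coordinates (pixels)
    bottom: int
    gaze_ms: float = 0.0  # accumulated fixation time on this passage

def accumulate_gaze(passages: list[Passage], fixation_y: float,
                    duration_ms: float, scroll_offset: float) -> None:
    """Map an on-screen fixation onto the passage it falls into.

    Adding the scroll offset turns screen coordinates into document
    coordinates, which is how the visible text area enters the computation.
    """
    doc_y = fixation_y + scroll_offset
    for p in passages:
        if p.top <= doc_y < p.bottom:
            p.gaze_ms += duration_ms
            break

def classify_passage(p: Passage, ms_per_word: float = 30.0) -> str:
    """Label a passage by comparing gaze time with an expected reading time.

    The per-word reading rate and the 0.7 cutoff are assumptions.
    """
    expected_ms = len(p.text.split()) * ms_per_word
    if expected_ms and p.gaze_ms >= 0.7 * expected_ms:
        return "read"
    if p.gaze_ms > 0:
        return "skimmed"
    return "unread"
}}}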
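
Explicit attention data from interaction devices needs little inference. The sketch below turns a stylus highlight into an evidence record; the ''AttentionEvidence'' structure and the confidence value are assumptions, not the project's schema.

{{{
from dataclasses import dataclass
from time import time

@dataclass
class AttentionEvidence:
    source: str        # e.g. "stylus"
    target: str        # identifier of the passage attended to
    confidence: float  # explicit user actions warrant high confidence
    timestamp: float

def stylus_highlight(passage_id: str) -> AttentionEvidence:
    """A highlight is a deliberate act, so it yields high-confidence evidence."""
    return AttentionEvidence(source="stylus", target=passage_id,
                             confidence=0.95, timestamp=time())
}}}

Because a highlight is deliberate, it can carry a much higher confidence than gaze-derived evidence, which is why a shared record format with an explicit confidence field is convenient.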
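
Connections to the physical environment could be handled as in the sketch below, where a scanned RFID tag or barcode ID is resolved against an assumed registry of known books; the registry, the field names, and the confidence value are all hypothetical.

{{{
from time import time
from typing import Optional

def physical_evidence(tag_id: str,
                      registry: dict[str, str]) -> Optional[dict]:
    """Resolve a scanned RFID tag or barcode ID to a known book.

    `registry` stands in for an assumed lookup table from tag IDs to book
    identifiers; reading the tag itself (hardware, drivers) is out of scope
    for this sketch.
    """
    book = registry.get(tag_id)
    if book is None:
        return None   # unknown object: no attention evidence generated
    return {"source": "rfid/barcode", "target": book,
            "confidence": 0.8, "timestamp": time()}
}}}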
