The Mymory Project

GenerationOfMultiModalAttentionEvidence

Difference between version 4 and version 1:
At line 3 changed 1 line.
[{ALLOW edit elst, schwarz, lauer}]
[{ALLOW edit elst, schwarz, lauer, buscher}]
At line 12 added 1 line.
At line 14 added 1 line.
At line 16 added 1 line.
At line 18 added 10 lines.
** The user’s ''attentional state & focus'' is estimated
*** using webcam and eye tracking technology.
** The ''text work recognition'' module
*** uses eye tracking + visible text area + scroll + mouse events.
*** classifies visible text passages: read / skimmed / unread / … (see the sketch after this list)
** ''Interaction devices'' explicitly provide attention data
*** e.g., usage of a digital stylus to highlight interesting text passages
** Connections to the ''physical environment'' come within reach
*** e.g., recognition of bar codes / RFID tags on books
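A minimal sketch of how such a ''text work recognition'' classifier might label visible passages from gaze data, assuming a simple dwell-time-per-word heuristic. The data structure, thresholds, and function names below are illustrative assumptions, not the actual Mymory module.

{{{
# Illustrative sketch only: names and thresholds are assumptions,
# not the Mymory text work recognition module.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Passage:
    """A text passage currently visible in the viewport."""
    passage_id: str
    word_count: int
    fixations_ms: List[int] = field(default_factory=list)  # gaze fixation durations (ms)


def classify_passage(p: Passage,
                     read_ms_per_word: float = 200.0,
                     skim_ms_per_word: float = 50.0) -> str:
    """Label a passage 'read', 'skimmed', or 'unread' from total gaze dwell time.

    The ms-per-word thresholds are guesses; a real module would also weigh
    scroll and mouse events and the order in which passages were fixated.
    """
    dwell = sum(p.fixations_ms)
    if p.word_count == 0 or dwell == 0:
        return "unread"
    ms_per_word = dwell / p.word_count
    if ms_per_word >= read_ms_per_word:
        return "read"
    if ms_per_word >= skim_ms_per_word:
        return "skimmed"
    return "unread"


abstract = Passage("abstract", word_count=60,
                   fixations_ms=[240, 310, 180, 400, 250] * 12)  # ~276 ms/word
footnote = Passage("footnote", word_count=40, fixations_ms=[90, 120])  # ~5 ms/word
print(classify_passage(abstract))  # -> read
print(classify_passage(footnote))  # -> unread
}}}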
At line 16 changed 1 line.
__''My''mory will exploit multiple sources of evidence for assessing user attention.__
__''My''mory exploits multiple sources of evidence for assessing user attention.__
[{Image src='images/Mymory_attention.png' align='center' width='100%' alt='Attention' caption='Attention Generation from Multiple Sources' border='0' style='font-size: 120%; color: green;'}]
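As a rough illustration of exploiting multiple sources of evidence, the sketch below fuses per-passage scores from the sources listed above into one attention value via a weighted average. The source names and weights are illustrative assumptions, not the project's actual fusion scheme.

{{{
# Illustrative sketch only: source names and weights are assumptions.
from typing import Dict

# Relative trust placed in each attention evidence source (assumed values).
EVIDENCE_WEIGHTS: Dict[str, float] = {
    "gaze": 0.5,          # webcam / eye tracking attentional state
    "text_work": 0.3,     # read / skimmed / unread classification
    "stylus": 0.15,       # explicit highlighting with a digital stylus
    "environment": 0.05,  # bar code / RFID context from physical documents
}


def fuse_attention(evidence: Dict[str, float]) -> float:
    """Weighted average over whichever sources reported a score in [0, 1]."""
    num = sum(EVIDENCE_WEIGHTS[s] * v for s, v in evidence.items() if s in EVIDENCE_WEIGHTS)
    den = sum(EVIDENCE_WEIGHTS[s] for s in evidence if s in EVIDENCE_WEIGHTS)
    return num / den if den else 0.0


# Only gaze and stylus evidence available for this passage:
print(fuse_attention({"gaze": 0.9, "stylus": 1.0}))  # ~0.92
}}}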
At line 33 added 1 line.
