Multi-modal cognitive language model / Spring 2019 - Master Thesis
Several resources of human signals recorded while people read are available, e.g., various eye-tracking [1,2,3], EEG [4], and fMRI [5] datasets. The goal of this project is to bring these resources for language understanding together, learn a representation for each of them, and combine them into a multi-modal language model. Such a cognitive language model is expected to have advantages over standard neural language models [6]. These advantages, and the quality of such a cognitive language model, are to be explored.
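As a minimal sketch of one possible fusion strategy, word-level cognitive features (e.g., fixation duration from eye-tracking) could simply be concatenated with word embeddings before being fed to a language model. All names, dimensions, and feature choices below are hypothetical illustrations, not a prescribed design:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical vocabulary and randomly initialised embedding table.
vocab = {"the": 0, "reader": 1, "blinked": 2}
embed_dim = 8
n_cog_features = 3  # e.g., fixation duration, fixation count, EEG band power
embeddings = rng.normal(size=(len(vocab), embed_dim))

def multimodal_input(word, cog_features):
    """Concatenate a word embedding with z-normalised cognitive features.

    The result could serve as the per-token input to a recurrent or
    transformer language model.
    """
    emb = embeddings[vocab[word]]
    feats = np.asarray(cog_features, dtype=float)
    feats = (feats - feats.mean()) / (feats.std() + 1e-8)
    return np.concatenate([emb, feats])

# One token with illustrative feature values (ms, count, normalised power).
x = multimodal_input("reader", [210.0, 2.0, 0.4])
print(x.shape)  # (11,)
```

Early concatenation is only one option; the project could equally explore learning a separate encoder per modality and fusing later, as in multi-modal neural language models [6].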
Supervised by Nora Hollenstein
1. Cop, U., Dirix, N., Drieghe, D., & Duyck, W. (2017). Presenting GECO: An eye-tracking corpus of monolingual and bilingual sentence reading. Behavior Research Methods, 49(2).
2. Kennedy, A., Pynte, J., Murray, W. S., & Paul, S. A. (2013). Frequency and predictability effects in the Dundee Corpus: An eye movement analysis. The Quarterly Journal of Experimental Psychology, 66(3).
3. Papoutsaki, A., Gokaslan, A., Tompkin, J., He, Y., & Huang, J. (2018, June). The eye of the typer: A benchmark and analysis of gaze behavior during typing. In Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications.
4. Hollenstein, N., Rotsztejn, J., Troendle, M., Pedroni, A., Zhang, C., & Langer, N. (2018). ZuCo, a simultaneous EEG and eye-tracking resource for natural sentence reading. Scientific Data.
5. Wehbe, L., Murphy, B., Talukdar, P., Fyshe, A., Ramdas, A., & Mitchell, T. (2014). Simultaneously uncovering the patterns of brain regions involved in different story reading subprocesses. PLoS ONE, 9(11).
6. Kiros, R., Salakhutdinov, R., & Zemel, R. (2014, January). Multimodal neural language models. In International Conference on Machine Learning.