
Learning to Predict Temporal Interestingness for Videos

Summary

The major goal of this project is to leverage eye-tracking data, including gaze positions and pupil diameter changes, as an implicit annotation for video interestingness prediction. Tracking users' eyes as they watch videos provides a rich data stream that contains patterns of attention and emotion: gaze positions reveal where users focused their attention, and pupil diameter changes index their arousal state. Toward this goal, the research funded by this award has developed pupillary light response models to account for brightness-related pupil diameter changes, as well as methods to measure and visualize interesting regions in images and videos, including omnidirectional (360°) content. Because the project is premised on the abundant availability of eye tracking data, the activities supported by this award have additionally investigated security and privacy threats to eye tracking data and the associated mitigations.
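
As a rough illustration of these two ingredients, the Python sketch below shows how a predicted pupillary light response could be subtracted from measured pupil diameter to isolate arousal-related changes, and how gaze samples can be aggregated into a Gaussian-blurred fixation map. The function names, the light-response formula and its constants, and the saliency-map parameters are placeholder assumptions for exposition; they are not the models evaluated or published under this award.

# Illustrative sketch only: separating arousal-related pupil changes from
# brightness-driven changes, and building a gaze saliency map.
import numpy as np
from scipy.ndimage import gaussian_filter

def predicted_light_response(luminance):
    # Hypothetical pupillary light response: predicted pupil diameter (mm)
    # as a decreasing function of screen luminance in [0, 1]. The functional
    # form and constants are placeholders, not the published models.
    return 7.0 - 3.0 * np.log1p(9.0 * luminance) / np.log(10.0)

def arousal_signal(pupil_diameter_mm, luminance):
    # Residual pupil diameter after removing the predicted light response;
    # the residual serves as an index of arousal.
    return pupil_diameter_mm - predicted_light_response(luminance)

def saliency_map(gaze_xy, frame_shape, sigma_px=30.0):
    # Accumulate gaze samples into a 2D histogram and blur it with a
    # Gaussian kernel (a standard fixation-map construction).
    h, w = frame_shape
    heat = np.zeros((h, w), dtype=float)
    for x, y in gaze_xy:
        xi, yi = int(round(x)), int(round(y))
        if 0 <= xi < w and 0 <= yi < h:
            heat[yi, xi] += 1.0
    heat = gaussian_filter(heat, sigma=sigma_px)
    return heat / heat.max() if heat.max() > 0 else heat

if __name__ == "__main__":
    # Synthetic example: 100 gaze samples clustered near the frame center.
    rng = np.random.default_rng(0)
    gaze = rng.normal(loc=(320, 240), scale=20, size=(100, 2))
    sal = saliency_map(gaze, frame_shape=(480, 640))
    arousal = arousal_signal(pupil_diameter_mm=np.array([4.8, 5.1]),
                             luminance=np.array([0.2, 0.2]))
    print(sal.shape, arousal)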

Details

Award ID: 1566481
PI: Eakta Jain
Award Duration: 2016-2020
Award Amount: $174,674

Related Publications

  1. The Security-Utility Trade-off for Iris Authentication and Eye Animation for Social Virtual Avatars. Brendan John, Sanjeev Koppal, Sophie Joerg, Eakta Jain. (2020) IEEE Transactions on Visualization and Computer Graphics (TVCG)
  2. Let It Snow: Adding pixel noise to protect the user’s identity, Brendan John, Ao Liu, Lirong Xia, Sanjeev Koppal, Eakta Jain. (2020, in press). ACM Symposium on Eye Tracking Research and Applications (ETRA) Adjunct Proceedings. Workshop on Privacy and Ethics in Eye Tracking (PrEThics)
  3. Look Out! A Design Framework for Safety Training Systems and A Case Study on Omnidirectional Cinemagraphs, Brendan John, Sri Kalyanaraman and Eakta Jain. (2020) IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (IEEE VR Workshop "TrainingXR")
  4. A Benchmark of Four Methods for Generating 360° Saliency Maps from Eye Tracking Data. Brendan John, Olivier Le Meur, Eakta Jain (2019) International Journal of Semantic Computing, Volume 13, Number 3.
  5. Using Audience Physiology to Assess Engaging Conservation Messages and Animal Taxa. Eakta Jain, Susan Jacobson, Pallavi Raiturkar, Nia Morales, Archana Nagarajan, Beida Chen, Naveen Sivasubramanian, Kartik Chaturvedi, and Andrew Lee (2019) Society & Natural Resources.
  6. Differential Privacy for Eye Tracking Data, Ao Liu, Lirong Xia, Andrew Duchowski, Reynold Bailey, Kenneth Holmqvist and Eakta Jain (2019) Proceedings of ACM Symposium on Eye Tracking Research & Applications (ETRA).
  7. EyeVEIL: Degrading Iris Authentication in Eye Tracking Headsets, Brendan John, Sanjeev Koppal, and Eakta Jain (2019) Proceedings of ACM Symposium on Eye Tracking Research & Applications (ETRA).
  8. An Evaluation of Pupillary Light Reflex Models for 2D Screens and VR HMDs, Brendan John, Pallavi Raiturkar, A. Banerjee, and Eakta Jain (2018) ACM Symposium on Virtual Reality Systems and Technology (VRST).
  9. A Preliminary Benchmark of Four Methods to Generate 360 Saliency Maps, Brendan John, Pallavi Raiturkar, Olivier Le Meur, and Eakta Jain (2018) First International Conference on Artificial Intelligence and Virtual Reality (AIVR). [This paper was among the top 10% of papers, which were invited to submit extended versions to an IJSC special issue.]
  10. DeepComics: Saliency estimation for comics, Kevin Bannier, Eakta Jain, Olivier Le Meur. (2018). ACM Symposium on Eye Tracking Research & Applications (ETRA).
  11. Identifying Computer-Generated Faces: An Eye Tracking Study, Pallavi Raiturkar, Hany Farid, Eakta Jain. (2018), University of Florida Technical Report IR00010525.

Datasets (with documentation)

  • Gaze & Pupil Diameter II
    • Code: Here
    • IRB: 201602528
    • Data: Here
  • Gaze & Pupil Diameter III
    • Code: Here
    • IRB: 201602528
    • Data: Here

Acknowledgements

This material is based upon work supported wholly or in part by the National Science Foundation under Grant No. 1566481.

Disclaimer

Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.