Frequently Asked Questions

What kinds of lecture videos does Lecture2Notes support?

Lecture2Notes was designed to summarize lecture videos captured by a camera at the back of the room. Ideally, the lecture contains slides that the presenter speaks about; if the video shows only a presenter speaking, the output will be subpar. The camera can pan around (for example, it can focus on the audience for a few seconds) without causing issues. Lecture2Notes also works perfectly well with presentations recorded via screen-capture software (the components used to process back-of-room recordings are automatically disabled). Displaying webcam output in the corner of a screen-capture presentation is fine as well, though performance may be better without the webcam displayed. Lecture2Notes can also handle switching between back-of-room capture and screen capture without issues.

Can Lecture2Notes process handwriting?

No, Lecture2Notes cannot detect or process handwriting. Moments where a presenter draws on a blackboard or whiteboard will likely be ignored by the AI model. Drawing on a slide, however, will still likely appear in the slide images, though not in the slide transcript.

How stable is Lecture2Notes?

Lecture2Notes is in beta, so bugs and crashes are practically guaranteed. Error-reporting tools notify us when a component fails, but there is no guarantee that any specific issue will ever be fixed. See the beta page for more information.

Question not answered? Submit it to us via our contact form, or, if you are familiar with the technologies used to create Lecture2Notes, see the project page and the GitHub repository.