Captions in online learning at CSU

Captioning is not only important for people who are deaf or hard of hearing; it also benefits anyone who prefers to view media with limited or no sound, whether by personal preference or because of their immediate environment, e.g. in a library, on public transport or in an open-plan office. Captions can also help where the spoken dialogue is not the viewer's first language, and they clarify terminology and improve overall comprehension. Furthermore, once captions have been created, the caption text makes videos searchable and improves Search Engine Optimisation (SEO).

The most common types of captioning are:

  • Automatic captions – these are generated automatically using speech recognition technology, e.g. in YouTube or Panopto. They are typically around 60–70% accurate, so editing is frequently required (provided you own the video and have editor/admin rights on that platform)
  • Manually created closed captions – correctly transcribed dialogue together with descriptive sounds, e.g. background noises or other audio cues; usually around 99% accurate
  • Word transcription – the audio within a video (such as the lecturer’s presentation) is converted to a written record, which can be used as a stand-alone artefact or in conjunction with the original media
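As an illustration, a manually created closed caption in the widely used SubRip (.srt) format might look like the following (the timings, dialogue and sound cue here are invented):

```
1
00:00:01,000 --> 00:00:04,000
[door closes]
Welcome everyone to this week's lecture.

2
00:00:04,500 --> 00:00:07,000
Today we will look at accessible media.
```

Each numbered cue pairs a start and end time with the text shown on screen, which is how platforms keep captions synchronised with the audio.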

Best Practices for creating quality captions

  • Use sans serif fonts (e.g. Arial, Verdana or Calibri)
  • No more than two lines of text on screen
  • White text with black outline or background
  • For multiple speakers, consider using names to identify who is talking, e.g.
    (Lecturer) What do you think?
    (Student) I think this is great
  • Ensure captions are synchronised with spoken words
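The practices above can be sketched in a short caption file in the WebVTT format used by many web players (the timings, speakers and dialogue are made up for illustration). Note that each cue stays within two lines and is timed to match the spoken words:

```
WEBVTT

00:00:01.000 --> 00:00:03.500
(Lecturer) What do you think?

00:00:03.500 --> 00:00:06.000
(Student) I think this is great.
```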

Automatic Creation of Captions for Audio/Video content

There are several ways in which captions can be created automatically, and these can make the overall process of providing accessible resources more efficient. The most relevant platforms are listed below. There are, however, some points to keep in mind. Firstly, the accuracy of speech-to-text recognition on platforms such as YouTube can range from as low as 60% to as high as 90%, and it is affected by factors such as ambient noise, the speaker’s tone, pitch and pace, the quality of the recording equipment, accents and subject matter. Although Artificial Intelligence and Machine Learning approaches are rapidly improving automated speech recognition, you need to be prepared to review, and possibly edit, the captions in the videos you’ve created. If you’re embedding YouTube videos that don’t belong to you into your Interact2 site, encouraging students to turn on the closed captions and providing written module commentary on the video is a good starting point; however, if the resource is critical you may need to investigate additional measures to ensure accessibility and equity are maintained.

Including and editing captions for CSU Replay (Panopto)

Panopto can generate automatic (ASR) captions, which the video owner can then edit. See Panopto's guide:

https://support.panopto.com/s/article/ASR-Generated-Captions

YouTube Editor

YouTube Editor gives you the choice of generating automatic captions, uploading a transcript file or creating your own captions. Once an account has been created, an MP4 can be uploaded, edited and then published.

Using Dragon Naturally Speaking software

Dragon speech recognition software can be used to create captions for YouTube, a Word document or other media players. The user listens to the video through dictation headphones and re-speaks the dialogue aloud, and Dragon converts this speech into a text transcription. Platforms such as YouTube will then set the timings so that the captions automatically line up with the audio.

Word Transcriptions 

Many students prefer a Word transcription of audio material. Rather than captions being embedded within the original media (such as a video with captions), the content is provided as a written transcription in a Word document. This is also beneficial for people using screen readers, for highlighting important details, or for printing a hard copy. The Express Scribe transcription software player is available to download for free, and it can be used with a supported foot pedal to increase typing productivity.

Adobe Connect Meetings

Live captioning can be carried out for Adobe Connect meetings using the services of a captioner/stenographer or other captioning service, e.g. AI Media. To prepare captions for recorded meetings, an offline MP4 recording is required, and academics can create these by following a few simple instructions. Follow this link for creating Adobe Connect offline MP4 recordings.

Note:  The Disability Service is able to provide specialised transcription for students registered with the service.  Contact CSU Disability Service
