Inspired by the following exchange on Twitter, in which someone captures and posts a valuable video but doesn't have the resources to easily transcribe it for the hearing-impaired, I thought it'd be fun to try out Amazon's AWS Transcribe service to help with this problem, and to see if I could do it all from the bash command-line like a Unix dork.
The instructions and code below show how to use command-line tools/scripting and Amazon's Transcribe service to transcribe the audio from online video.
tl;dr: AWS Transcribe is surprisingly accurate and efficient. It took about 2 minutes to process a 57-second clip, at a cost of less than 2.5 cents. It beats the pants off of what I remember IBM Watson being capable of (albeit from a few years ago).
See the transcribed text here, and the full prettified JSON response here.
- Sign up for Amazon Web Services: https://aws.amazon.com
- Create an S3 bucket (the example I use in my code is `data.danwin.com`)
And install the following tools (using homebrew, pip, and what-have-you):
- youtube-dl - for fetching video files from social media services
- awscli - for accessing various AWS services, specifically S3 (for storing the video and its processed transcription) and Transcribe
- curl - for downloading from URLs
- jq - for parsing JSON data
- ffmpeg - for media file conversion, e.g. extracting mp3 audio from video
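On macOS, the installs might look something like this (a sketch; youtube-dl and awscli can also be installed via pip, and curl typically ships with the operating system):

```shell
# via Homebrew
brew install youtube-dl awscli jq ffmpeg

# or, for the Python-based tools, via pip
pip install youtube-dl awscli
```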
The best way to learn and use the command line is to practice the Unix philosophy of "do one thing and do it well," which means breaking the process down into individual steps:
- Find a tweet containing a video you like
- Get that tweet's URL, e.g. https://twitter.com/JordanUhl/status/1085669288051175424
- Use youtube-dl to download the video from that tweet and save it to disk, e.g. `cardib-shutdown.mp4`
- Because AWS Transcribe requires we send it an audio file, use ffmpeg to extract the audio from `cardib-shutdown.mp4` and save it to `cardib-shutdown.mp3`
- Because AWS Transcribe only works on audio files stored on AWS S3, use awscli to upload `cardib-shutdown.mp3` to an online S3 bucket, e.g. http://data.danwin.com/tmp/cardib-shutdown.mp3
- Use awscli to access the AWS Transcribe API and start a transcription job
- Wait a couple of minutes, and/or use awscli to occasionally get the details of the transcription job to see if it's finished (sample response JSON from the get-transcription-job endpoint)
- Use curl to download the transcript data from the expected URL, e.g. http://data.danwin.com/cardib-shutdown.json (see pretty preview here)
- Use jq to process the transcript data and extract the `transcript` value, which contains the transcription text as a single string.
So obviously you should not do this as one big ol' bash script (or maybe even in bash/CLI at all). But I wrote this example up for a talk on how you can learn the CLI by messing around for fun, and this is an elaborate example of the pain you can put yourself through. Maybe later I'll show how a novice might approach it, but this is what it looks like if you're trying not to care too much, while also not wanting it to be too painful:
# Fetch that video and save it to the working directory
# as `cardib-shutdown.mp4`
youtube-dl --output cardib-shutdown.mp4 \
https://twitter.com/JordanUhl/status/1085669288051175424
# extract the audio as a mp3 file
ffmpeg -i cardib-shutdown.mp4 \
-acodec libmp3lame cardib-shutdown.mp3
# upload the mp3 file to a S3 bucket
# (and optionally make it publicly readable)
aws s3 cp --acl public-read \
cardib-shutdown.mp3 s3://data.danwin.com/tmp/cardib-shutdown.mp3
# Start the transcription job and specify that the transcription result data
# be saved to a given bucket, e.g. data.danwin.com
aws transcribe start-transcription-job \
--language-code 'en-US' \
--media-format 'mp3' \
--transcription-job-name 'cardib-shutdown' \
--media '{"MediaFileUri": "s3://data.danwin.com/tmp/cardib-shutdown.mp3"}' \
--output-bucket-name 'data.danwin.com'
# optionally: use this to check the status of the job before attempting
# to download the transcript
aws transcribe get-transcription-job \
--transcription-job-name cardib-shutdown
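```shell
# (A sketch, not part of the original recipe: rather than checking by hand,
# you could poll the job every 15 seconds until it leaves the IN_PROGRESS
# state. The jq filter pulls TranscriptionJobStatus out of the
# get-transcription-job response.)
while [ "$(aws transcribe get-transcription-job \
             --transcription-job-name cardib-shutdown \
           | jq -r '.TranscriptionJob.TranscriptionJobStatus')" = 'IN_PROGRESS' ]; do
  sleep 15
done
```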
# Download the JSON at the expected S3 URL, parse it with jq
# and spit it out as raw text
curl -s http://data.danwin.com/cardib-shutdown.json \
| jq '.results.transcripts[0].transcript' --raw-output
Here's what Cardi B said, according to AWS Transcribe, which you can read along with the audio or the original tweet video. I've added some paragraph breaks for easier reading, but the period/sentence-breaks are all from the AWS Transcribe service:
Hey. Yeah. I just want to remind you because there's been a little bit over three weeks, okay? It's been a little bit over three weeks. Trump is now ordering as his some missing federal government workers to go back to work without getting paid.
Now, I don't want to hear your mother focus talking about all but Obama Shut down the government for seventeen days year bitch for health care. So your grandma could check her blood pressure and your business to go take a piss in the gynecologist with no motherfucking problem.
Now, I know a lot of guys don't care because I don't work for the government or your partner. They have a job, but this shit is really fucking serious, bro. This city is crazy. Like a country is in a hell hole right now. All for fucking war. And we really need to take this serious.
I feel that we need to take some action. I don't know what type of actual base because it is not what I do, but I'm scared. This is crazy. And I really feel bad for these people. They got to go to fucking work, to not get motherfucking paid.
For convenience's sake, here's a screenshot of a transcription tweet that @JordanUhl sent out later:
The verdict? Not bad! You can see the word-by-word confidence in the full transcript JSON, but I'm impressed with the simple text output, which contains capitalization of proper nouns (e.g. "Obama") and guesses at where sentences begin, never mind a pretty good understanding of Cardi B's Bronx accent. It stumbles on very fast cuss words -- "y'all motherfuckas" comes out as "your mother focus" and "check that pussy" becomes "take a piss". But it also manages to accurately transcribe fast and unusual phrases like "in the gynecologist with no motherfucking problem".
How much did it cost? AWS Transcribe charges $0.0004 per second, and this clip was 57 seconds. Not counting the S3 upload/storage fees, the price for transcription comes out to about 2.3 cents.
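That arithmetic is easy to script, too. A sketch: measure the clip's duration with ffprobe (which ships alongside ffmpeg) and multiply by the per-second rate:

```shell
# Get the clip's duration in seconds
duration=$(ffprobe -v error -show_entries format=duration \
  -of default=noprint_wrappers=1:nokey=1 cardib-shutdown.mp3)

# AWS Transcribe bills $0.0004 per second of audio
awk -v secs="$duration" 'BEGIN { printf "$%.4f\n", secs * 0.0004 }'
# for a 57-second clip, this prints $0.0228
```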
Ran a Transcribe job on President Trump's presser today, regarding the shutdown and something about working with groceries and banks. Here's how Transcribe does with multiple speakers (e.g. Trump, and the reporter):
Plaintext of the transcription, with leading/trailing words trimmed, and the reporter's question in italics -- it's pretty good, all things considered.
Ross said that he doesn't understand. What federal workers, we help getting food. You Can you understand that?
I haven't. I haven't heard the statement, but I do understand. And perhaps you should have said it differently. Local people know who they are when they go for groceries and everything else. And I think what Wilbur is probably trying to say is that they will work along. I know banks have working along. If you have mortgages, the mortgages and mortgage, the folks collecting the interest and all of those things, they work along. And that's what happens in time like this. They know the people. They've been dealing with them for years, and they work along the grocery store. And I think that's probably what Wilbur Ross. But I haven't seen a statement, but he's done a great job, I will tell you that.
Thought it'd be worth trying the multiple-speaker identification on this other political video that's floating around Twitter today:
https://twitter.com/AuthorFarrah/status/1088565656327458817
The invocation, following the API's requirement that the maximum number of possible speakers be specified (I chose 4):
aws transcribe start-transcription-job --language-code 'en-US' --media-format mp3 \
--settings '{"ShowSpeakerLabels": true, "MaxSpeakerLabels": 4}' \
--transcription-job-name $FNAME \
--media "{\"MediaFileUri\": \"s3://data.danwin.com/tmp/${FNAME}.mp3\"}" \
--output-bucket-name 'data.danwin.com'
Resulting JSON output
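For a quick look at who spoke when, you can pull the speaker-label segments out of the result JSON with jq. (A sketch: the `speaker_labels.segments` field names come from Transcribe's output schema, and `$FNAME` is the same job-name variable used in the invocation above.)

```shell
# List each speaker-labeled segment as "speaker: start-end" (in seconds)
curl -s "http://data.danwin.com/${FNAME}.json" \
  | jq -r '.results.speaker_labels.segments[]
             | "\(.speaker_label): \(.start_time)s-\(.end_time)s"'
```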
@briankung sorry I'm dumb. didn't even read my old gist that had the Senate example. Looks like there is speaker identification #file-transcript-senate-bennett-json, but you were asking if it could be embedded with each transcribed item instead of its own object in the JSON that you then have to process/align on your own. Yeah I'd be surprised if they've changed the output format of this transcribe-job API to include that now
¯\_(ツ)_/¯