My Top Essential Plover Commands

Plover is the world’s first free and open-source stenography software. But unlike traditional computer-aided (machine-shorthand) translation systems, which typically resemble sandboxed word processors, Plover lets you use your steno machine as an OS controller/input device. Each has its own user base and use cases and I’m not saying one is inherently better or worse than the other. They fundamentally serve different audiences. However, one of the benefits of Plover is that it unlocks a whole new world of single-stroke commands one can use to control their machine. Here’s my list of the top commands that I use nearly every day. Keep in mind that I use macOS. If you use Windows, for most commands, you can just substitute “ctrl” for when I use “command” and you’ll be good to go. An exhaustive list of all the commands Plover has to offer is available here.
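All of the entries below are ordinary dictionary definitions, with the steno stroke on the left and the translation on the right. If you maintain a user dictionary as a JSON file rather than adding entries through Plover’s Add Translation tool, a few of them would look something like this (taking a couple of strokes from the list below as examples):

{
	"SKR-R": "{#control(left)}",
	"SKR-G": "{#control(right)}",
	"TKPWAUTS": "{>}git status"
}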

1. Switching workspaces

When you have several workspaces on macOS and you want to switch between them quickly.

SKR-R = {#control(left)}

SKR-G = {#control(right)}

“switch to the workspace to the left” (mnemonic hook: “screen-⬅”)

“switch to the workspace to the right” (mnemonic hook: “screen-➡”)

*Think of the RPBG keys as standard keyboard arrow keys.

2. Go to the end/beginning of the document

Useful for when you’re making edits while captioning and the speaker starts talking again, or you just want to snap to the bottom of the document you’re working on.

SR-RZ =  {#command(down)}

SR-FD = {#command(up)}

3. Git commands

TKPWHRA*UL (git add [all]) = {>}git add .

TKPWEUPLT = {>}git commit

TKPW*EUPLT = {>}git commit -m "{^}{-|}

TKPWHRURP = {>}git pull origin master

TKPWHRURB =  {>}git push origin master

TKPWAUTS = {>}git status

4. One-stroke money formatters

When people say sums of money in ways that make you stop and think where to put the commas and periods, do these instead.

TKHR-RS = {*($c)}

“Turn the previous number into a correctly-formatted sum of money.”

3 + TKHR-RS becomes $3

4.5 + TKHR-RS becomes $4.50

.33 + TKHR-RS becomes $0.33

THO*UDZ = {^}000{*($c)}

“Turn the previous number into thousands of dollars.”

4 + THO*UDZ becomes $4,000

200 + THO*UDZ becomes $200,000

PH*LDZ = {^}{#option(left)}${#option(right)}million

“Turn the previous number into millions of dollars.”

7 + PH*LDZ = $7 million

*Use the same paradigm to do trillions of dollars (TR*LDZ) and billions of dollars (PW*LDZ).

5. Start upper with no space, start lower with no space.

When you want to suppress automatic spaces and want to control the case of the first letter of the output.

KPA* = {^}{-|}

“Start text where my cursor is currently located and capitalize the first letter.”

STHROER = {^}{>}

“Start text where my cursor is currently located and make sure the first letter is lowercased.”

6. Special characters that need to be escaped.

Certain characters that are part of Plover’s dictionary syntax need to be escaped correctly to define them so that Plover doesn’t get confused and misinterpret the entry.

KHRURB (curly brace open) = \{{^}

{

KHRURBS (curly brace close) = {^}\}

}

PWHR*RB (backslash) = {^\^}

\

7. Comment slashes

KPHERBS (comment slashes) = {^}//

// (hi)

KPH*ERBS (comment slashes + cap next) = {^}//{-|}

// (Hi)

TAEUBL/TPHR-P = (╯°□{#Shift_L}°)╯︵ ┻━┻

(╯°□°)╯︵ ┻━┻

*Note the required {#Shift_L} to prevent special-character encoding weirdness.

SKWHR*UG = ¯\_(ツ{#Shift_L}{^})_/¯

¯\_(ツ)_/¯

*Note the required {#Shift_L} to prevent special-character encoding weirdness.

8. Brightness up/down

PWR*P = {#MonBrightnessUp}

PWR*B = {#MonBrightnessDown}

*Again, think of the P and B as up and down on a standard keyboard arrow set.

9. Sound controls

SRAO*UP (volume up) = {#AudioRaiseVolume}

SRAO*UB (volume down) = {#AudioLowerVolume}

SRAO*UPLT (volume mute) = {#AudioMute}

10. On-the-fly speaker definition

For when you didn’t create a speaker definition for someone beforehand but you still want to mark them by name in the transcript rather than using chevrons (>>).

SPOEU = {*<}{^:}{#Alt_L(Left)}{^\n\n^}{#command(right)}{^ ^}{-|}

Inserts a new line, puts the last name (or any word that comes prior to the stroke) in all caps, adds a colon, and capitalizes the next character.

“Josh” becomes \nJOSH:(capitalize next)

11. Go to end of line & add a semicolon

TKHRAO*EUPB (go to end of line) = {#command(right)}

SKHROEUPB (add semicolon to end of line — used frequently in writing JavaScript) = {#command(right)}{;}

12. CamelCase

KPW-PBG (cap next word and attach it) = {^}{-|}

set KPW-PBG attribute becomes setAttribute

KPHAO*EL (cap the first word and the next word, attach them) = {*-|}{^}{-|}

word KPHAO*EL press becomes WordPress

13. Add comma at end of line, make new line

Handy for writing JavaScript objects or working with JSON.

KPHAPBL (comma + new line) = {#right}{,}{^\n^}{^}

14. Indented parentheticals

Used for captioning to mark things like laughter or applause.

KHRAFT (laughter) = {^\n^}{^\n^}{^\t^}{^}[ Laughter ]{-|}

Adds two new lines, indents one tab, and adds [ Laughter ].

15. Dot-slash

Used for pointing to the current working directory in bash.

TKHR*RB (dot slash) = ./{^}

16. “S” is for “sticky” (parentheses & brackets)

Useful for writing function declarations and array pointers.

SPR-PB = {^}({^}

SPR-PBD (paren, double) = {^}()

SPWR*BG = {^}[{^}

So these are a few of the most important ones that I use for now. I’ll keep adding to this list as I think of more!

Hacking Music Accessibility Via Realtime Stenography (and Code): Recap

This March, I got to present at the NYC Monthly Music Hackathon about my experiments using stenography and coding to enhance accessibility of auditory information for those with hearing loss who use real-time captioning as a form of accommodation.

Photo of Stanley in front of an audience. On the left side there's a screen with the title slide of his talk and on the right is Stanley standing in front of another screen, to which he's transcribing himself.
If you think that looks like I’m transcribing myself, you would be correct!

Problem

While sign language interpreters can use their body and expressive, animated signing to convey the tone, rhythm, and various other qualities of music, real-time captioners are comparatively limited in their expressive capabilities. Deaf attendees who primarily use spoken language typically must rely on a combination of their residual hearing, vibrations, cues from the performer or audience members, and parenthetical descriptions inserted by the captioner to piece together a mental representation of what’s being heard.

Here’s a short clip of what live captioning looks like in action:

Despite having provided real-time captioning services for almost six years now, I hadn’t considered this at all until I was hired by Spotify to caption their February Monthly Music Hackathon. A deaf attendee, Jay Zimmerman, approached me and requested that I describe the audio segments during the demo portion as best I could. As a musician who had lost his hearing later in life, Jay wanted and appreciated a greater level of access than the simple music-note symbol often seen in TV closed captions, or just the lyrics on screen. So I did the best that I could:

 On screen, it shows a bunch of dialogue and then parentheticals for [ Big Room House/EDM ] and [ Tiësto, Martin Garrix - "The Only Way Is Up" ].
Stanley’s captioning screen showing captions from Spotify’s Monthly Music Hackathon event.
For me, this wasn’t good enough. Even when the captioner knows a song well enough to nail the title, it still feels lacking. It just isn’t a suitable accommodation for someone who truly wants to experience the acoustic information in a more direct way. There’s simply too much information to sum up in short parentheticals. Additionally, when I’m captioning, I’ve found it difficult to come up with accurate and succinct descriptions on the spot when things are happening very quickly in live settings.

Brainstorming and Thought Process

A while back, I wrote a post on my project, Aloft. In short, it’s a cloud-based caption delivery app which enables participants to access stenographers’ live transcription output on any Internet-enabled device with a modern browser. Its main use case was originally for those participating in events or lectures remotely:

Screenshot of a remote participant watching the keynote address at WordCamp US Nashville depicting a PowerPoint slide and the realtime captions underneath.
Example of Aloft being used by a remote conference participant.

But I actually use Aloft in all of my captioning work, including on-site jobs where I hook up to a projector, drag the participant view page over to the external monitor, and display it full screen, like so:

Photo depicts a man in a wheelchair giving a talk while a full-screen view of live transcription is visible behind him.
Example of Aloft being used on-site at the March NYC Monthly Music Hackathon. Speaker: Patrick Anderson.
Image of a large audience at a tech conference watching the keynote with live captions above the presentation slides.
Example of using Aloft for large-event captioning at WordCamp US 2017.

Well, one of the great benefits of working with software you wrote yourself is that, as you encounter different needs, you are free to modify your tools to meet them. In fact, one of the reasons I created Aloft was because of how annoyingly restrictive and unintuitive commercial legal-transcription viewing platforms typically are with respect to seemingly simple things like being able to change the font size or the color palette.

So that got me thinking…

What if we could simultaneously convey the transcription of spoken content in addition to some visual representation of the audio?

Initial Research

At first, I didn’t know where to start; at the time, I hadn’t really dealt with computer audio processing at all. So I began by researching different JavaScript packages, since the solution needed to be web compatible and I didn’t really want to deal with having to run code through an interpreter. I came across a package called p5.js, which a lot of the folks at the hackathon seemed to have either heard about already or used in the past. The Frequency Spectrum add-on seemed pretty promising.

 

Screenshot depicting the demo page for the Frequency Spectrum add-on. A frequency spectrogram (squiggly lines on a light gray background) and some JavaScript code beneath are visible.
P5’s Frequency Spectrum add-on

Implementation

So I created a special route in Express so that the visualizer would only live on a page I could point the browser to during the occasions I need it. We don’t want all those extra scripts loading when I’m doing just normal captioning!

// Soundbar route: the p5 scripts are only loaded on this page, not on the normal caption view.
app.get('/soundbar/:user/:event', function(req, res) {
	// Look the session up by captioner and event name before rendering.
	Event.findOne({'user': req.params.user, 'event': req.params.event},
	function(err, event) {
		if(err) {
			throw err;
		} else {
			// Render the soundbar-enabled watch view for this session.
			res.render('watch-soundbar', {
				'user': req.params.user,
				'event': req.params.event,
				'marker': req.params.marker
			});
		}
	});
});

So now, I would load the libraries only if I went to localhost:4000/soundbar/stanley/monthly-music-hackathon.

Then, it was just a matter of importing the libraries into the view, and initiating p5 via the code provided on the demo page with some minor modifications.

var mic, fft;

function setup() {
	// Size the canvas from the parent window so it stays proportional to the captioning screen.
	let canvas = createCanvas(windowWidth, windowWidth/5);
	canvas.parent('waveform');

	noFill();

	// Listen to the microphone and feed it into an FFT analyzer.
	mic = new p5.AudioIn();
	mic.start();
	mic.amp(10);
	fft = new p5.FFT();
	fft.setInput(mic);
}

function draw() {
	// bgColor and fontColor are interpolated by the server-side template.
	background('#{bgColor}');
	stroke('#{fontColor}');
	strokeWeight(3);

	// Grab the current frequency spectrum and draw it as a single connected line.
	var spectrum = fft.analyze();

	beginShape();

	for (let i = 0; i < spectrum.length; i++) {
		vertex(i, map(spectrum[i], 0, 255, height, 100));
	}

	endShape();
}

function windowResized() {
	// Keep the canvas spanning the full window width on resize.
	resizeCanvas(windowWidth, 250);
}

A few changes I made were: I modified the canvas dimensions so that they are automatically calculated from the parent window, so the waveform is always proportional to the size of the captioning screen. Since we don’t know what the foreground and background colors are, as they’re dynamically set by an Angular service, I passed those in as variables that are pulled from local storage and sent to the controller. Finally, I passed in the div in which it should display, and we have a working audio waveform at the bottom of the screen!

 Screenshot of a web page displaying real-time transcriptions with a squiggly-line audio frequency spectrum displayed at the bottom.
View of frequency spectrum at the bottom of the Aloft viewer page.

But, wait a minute. What if I don’t want to see the “soundbar” all the time? What if I only want it to be displayed when appropriate, as in during a live musical demo or performance? Well, why don’t I create a command that I can invoke from the steno machine that would turn it on and off? So I arbitrarily decided that &SOUNDBAR would be the flag I would use to turn the soundbar on and implemented it like so:

// If the number of &SOUNDBARs in the transcript is even, hide the soundbar; if it's odd, show it.

if ((text.match(/&SOUNDBAR/g) || []).length % 2 === 0)  {
	soundbar.style.display = 'none';
	captionArea.style.paddingBottom = '4rem';
} else {
	soundbar.style.display = 'inherit';
	captionArea.style.paddingBottom = '0rem';
}

So essentially, every time the front end receives new text, before it modifies the DOM, it uses a regular expression to count the occurrences of &SOUNDBAR: if the count is odd, show the soundbar; if it’s even, hide it. It’s probably not the most optimal solution, in that the microphone stays active and the FFT keeps running in the background whether or not the soundbar is visible, but it works for now.

Now, the soundbar remains hidden until I choose to show it by writing SPWA*R on my steno machine, which spits out &SOUNDBAR in the transcript. As soon as I don’t need it anymore, I write SPWA*R again, the number of soundbar flags on the page is now even, and it hides the div.

 Screenshot of two screens, one of the captioner's screen where you can see the captioner wrote "&SOUNDBAR" to flag Aloft to open the soundbar. On the second screenshot, you can see the squiggly frequency spectrum "soundbar" displayed.
How the &SOUNDBAR flag is used by the captioner to display the frequency spectrum.

An obvious drawback is that if the consumer wants a copy of the transcript, I would have to go back and delete all the occurrences of &SOUNDBAR, but that’s a problem to solve for another day. Another problem is that p5 is always running in the background even when the div is hidden. This is a limitation of the package itself, unfortunately, since I’ve encountered trouble loading it via an event handler as opposed to how it was designed to work: on page load.
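One possible mitigation, sketched here untested and assuming the mic variable and the soundbar/captionArea elements from the snippets above are in scope, would be to pause both the microphone and the draw loop whenever the soundbar is hidden, using p5’s loop()/noLoop() and AudioIn’s start()/stop():

// Hypothetical helper: pause or resume the p5 sketch along with the soundbar's visibility.
function setSoundbarVisible(visible) {
	if (visible) {
		soundbar.style.display = 'inherit';
		captionArea.style.paddingBottom = '0rem';
		mic.start();  // resume capturing audio
		loop();       // resume calling draw() every frame
	} else {
		soundbar.style.display = 'none';
		captionArea.style.paddingBottom = '4rem';
		mic.stop();   // stop capturing audio
		noLoop();     // stop the draw loop
	}
}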

A limitation that Jay pointed out was that, while the soundbar solution is great for visualizing instantaneous peaks (what’s loud or quiet at the current moment), it’s relatively poor at visualizing what’s going on temporally. So I played around with another possible solution, this time using a different Web Audio-based package called Awesome Realtime Audio Visualizer in JS, HTML, created by Will Brickner. His audio visualization spectrogram conveys frequency on the vertical axis, time on the horizontal axis, and intensity through pixel brightness. When the visualizer is activated, a yellow line scans across the screen from left to right, drawing a spectrogram in real time and giving one more of a temporal idea of the auditory content:
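For the curious, the underlying idea is not specific to that library. A generic sketch using the standard Web Audio API (this is not Will’s code, just an illustration of the technique, with a hypothetical canvas element) grabs an FFT frame from an AnalyserNode on every animation frame and paints it as one vertical strip of pixels, with brightness tracking intensity and each new frame advancing one column to the right:

// Generic Web Audio spectrogram sketch: frequency on the y-axis, time on the x-axis, intensity as brightness.
const canvas = document.getElementById('spectrogram'); // hypothetical canvas element
const ctx = canvas.getContext('2d');
let x = 0; // current column (the "scanning line")

navigator.mediaDevices.getUserMedia({ audio: true }).then(function(stream) {
	const audioCtx = new AudioContext();
	const analyser = audioCtx.createAnalyser();
	analyser.fftSize = 2048;
	audioCtx.createMediaStreamSource(stream).connect(analyser);

	const bins = new Uint8Array(analyser.frequencyBinCount);

	(function drawColumn() {
		analyser.getByteFrequencyData(bins); // current FFT frame, one value (0-255) per frequency bin
		for (let i = 0; i < bins.length; i++) {
			// Low frequencies at the bottom of the canvas, high frequencies at the top.
			const y = canvas.height - 1 - Math.floor((i / bins.length) * canvas.height);
			ctx.fillStyle = 'rgb(' + bins[i] + ',' + bins[i] + ',0)';
			ctx.fillRect(x, y, 1, 1);
		}
		x = (x + 1) % canvas.width; // wrap around at the right edge
		requestAnimationFrame(drawColumn);
	})();
});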

Screenshot of a captioning window with a dynamic audio spectrogram as the complete background.
Demo of spectrogram with “Crowd Control” – Fisher playing in the background.

In contrast to the soundbar, this visualizer takes up the whole screen and becomes the entire page’s background. In the view, it lives as an HTML canvas object. As with the soundbar example, this implementation also has its own route (/spectrogram) and it also allows the captioner to turn the audio visualizer on and off depending on need.

Final Thoughts and Next Steps

The biggest challenge is really being able to convey enough depth to make the experience as immersive and inclusive as possible. While these two solutions are a great start, perhaps combining them somehow into one page would let the user benefit from both formats. Or better yet, incorporate a more-complex visualizer to allow for more bandwidth. As Jay puts it:

Both give a sense of amplitude and pitch, with amplitude being easier to perceive so we can see when sound is happening and whether or not it is soft or loud.

The issue with frequency/pitch information is how to make it clear and easy to decipher, which leads to linear vs exponential or logarithmic graphing.

Easiest way to comprehend pitch/frequency quickly is on the vertical, so high pitches are high and low pitches are low.

So what if we could harness a different axis like color to indicate volume, leaving the y-axis free for pitch and the x-axis to denote time? What if instead of a line or a sequence of bars, we had a 3D object that morphs and changes in size in response to different volumes or frequencies? But then that leads to the question: at what level of complexity would the visuals negatively affect the legibility of the captioning?

Aloft!

First off, apologies for the long radio silence. It’s been far too long since I’ve made any updates! But I just had to share a recent project that I’m currently probably the most excited about.

[ Scroll down for TL;DR ]

To begin, a little background. For the past several months, I’ve been captioning an intensive web design class at General Assembly, a coding academy in New York City. Our class in particular utilized multiple forms of accommodation with four students using realtime captioning or sign interpreters depending on the context, as well as one student who used screen sharing to magnify and better view content projected in the classroom. A big shout-out to GA for stepping up and making a11y a priority, no questions asked.  

On the realtime captioning front, it initially proved to be somewhat of a logistical challenge, mostly because the system my colleague and I were using to deliver captions is commercial deposition software, designed primarily with judicial reporting in mind. The system turned out to be less than ideal for this specific context for a number of reasons.

  1. Commercial realtime viewing software either fails to address the needs of deaf and hard of hearing individuals who rely on realtime transcriptions for communication access or addresses them half-assedly as an afterthought.

The UI is still clunky and appears antiquated. It is limited in its ability to change font sizes, line spacing, and colors, which makes it unnecessarily difficult to access options that are frequently crucial when working with populations with diverse sensory needs. Options are sequestered behind tiny menus and buttons that are hard to hit on a tablet. One of the most glaring issues was the inability to scroll with a flick of the finger: the software hasn’t really been updated since the widespread popularization of touch interfaces, so there is practically no optimization for them. To scroll, the user must drag the tiny scroll bar on the far edge of the screen with high precision. Menus and buttons clutter the screen and take up too much space, and everything just looks ugly.

  2. Though the software supports sending text via the Internet or locally via Wi-Fi, most institutional Wi-Fi is either not consistently strong enough, or too encumbered by security restrictions to send captions reliably to external devices.

Essentially, unless the captioner brought his or her own portable router, text would be unacceptably slow or the connection would drop. Additionally, unless the captioner either has access to an available ethernet port into which to plug said router or has a hotspot with a cellular subscription, this could mean the captioner is without Internet during the entire job.

Connection drops are handled ungracefully. Say I were to briefly switch to the room Wi-Fi to google a term or check my email. When I switched back to my router and started writing, only about half of the four tablets would usually survive and continue receiving text. The connection is very fragile, so you pretty much have to set it and leave it alone.

  3. Makers of both steno translation software and realtime viewing software alike still bake in lag time between when a stroke is hit by the stenographer and when it shows up as translated text.

This is a topic on which Mirabai has weighed in extensively: most modern commercial steno software runs on time-based translation (i.e., translated text is sent out only after a timer of several milliseconds to a second runs out). This is less than ideal for a deaf or hard of hearing person relying on captions, as it creates a somewhat awkward delay that a hearing person merely watching a transcript for confirmation, as in a legal setting, would not notice.

  4. Subscription-based captioning solutions that send realtime text over the Internet are ugly and have an additional layer of lag built in due to their reliance on Ajax requests that poll for new text on a timed interval.

Rather than utilizing truly realtime technologies, which push changes out to all clients immediately as the server receives them, most subscription-based captioning services rely on the outdated tactic of burdening the client machine with repeatedly pinging the server to check for changes in the realtime transcript as it is written (a rough sketch of the difference follows this list). Though not obviously detrimental to performance, the obstinate culture of “not fixing what ain’t broke” continues to prevail in stenographic technology. Additionally, the commercial equivalents to Aloft are cluttered with too many on-screen options, without a way to hide the controls, and, again, everything just looks clunky and outdated.

  5. Proprietary captioning solutions do not allow for collaborative captioning.

At Kickstarter, I started providing transcriptions of company meetings using a combo of Plover and Google Docs. It was especially helpful to have subject matter experts (other employees) be able to correct things in the transcript and add speaker identifications for people I hadn’t met yet. More crucially, I felt an overall higher sense of integration with the company and the people working there as more and more people got involved, lending a hand in the effort. But Google Docs is not designed for captioning. At times, it would freeze up and disable editing when text came in too quickly. Also, the user would have to constantly scroll manually to see the most recently added text.
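To make the polling-versus-push distinction from point 4 concrete, here is a rough sketch with hypothetical endpoint names and element IDs (this is not Aloft’s actual code): the polling client asks the server for the transcript on a fixed timer, while the push client simply reacts whenever the server sends a change over a WebSocket.

var captionArea = document.getElementById('caption-area');    // hypothetical element

// Polling: ask the server for the latest text every second, whether or not anything changed.
setInterval(function() {
	fetch('/api/transcript?job=example')                      // hypothetical endpoint
		.then(function(res) { return res.text(); })
		.then(function(text) { captionArea.textContent = text; });
}, 1000);

// Push: the server sends new text the instant it exists; the client just listens.
var socket = new WebSocket('wss://example.com/realtime');     // hypothetical URL
socket.onmessage = function(msg) {
	captionArea.textContent += msg.data;                      // append immediately
};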

With all these frustrations in mind, and with the guidance of the GA instructors in the class, I set off on building my own solution to these problems with the realtime captioning use case specifically in mind. I wanted a platform that would display text instantaneously with virtually no lag; was OS and device agnostic and touch-interface compatible; was extremely stable, with a rock-solid connection that could handle disconnects and drops on both the client and provider side without dying a miserable death; and, last but not least, delivered everything on a clean, intuitive interface to boot.

How I Went About It

While expressing my frustrations during the class, one of the instructors informed me of the fairly straightforward nature of solving the problem, using the technologies the class had been covering all semester. After that discussion, I was determined to set out to create my own realtime text delivery solution, hoping some of what I had been transcribing every day for two months had actually stuck.

Well, it turned out not much did. I spent almost a full week learning the basics the rest of the class had covered weeks ago, re-watching videos I had captioned, and putting together little bits of code before trying to pull them together into something larger. I originally tried to use a library called CodeMirror, which is essentially a web-based collaborative editing program, using Socket.io as its realtime interface. It worked at first, but I found it incapable of handling the volume and speed of text produced by a stenographer; it constantly crashed when I transcribed faster than 200 WPM. I later discovered another potential problem: Socket.io doesn’t guarantee order of delivery. In other words, if certain data were received out of order, the discrepancy between what was sent to the server and what a client received would cause the program to freak out. There was no logic that would prioritize and handle concurrent changes.

I showed Mirabai my initial working prototype and she was instantly all for it. After much fuss around naming, Mirabai and I settled on Aloft as it’s easy to spell, easy to pronounce, and maintains the characteristic avian theme of Open Steno Project.

I decided to build Aloft for the web so that any device with a modern browser could run it without complications. The core server is written in Node. I used Express and EJS to handle routing and layouts, and jQuery with plain JavaScript for handling the dynamic front-end bits.

Screenshot of server code

I incorporated the ShareJS library to handle realtime communication, using browserchannel as its WebSockets-like interface. Additionally, I wrapped ShareJS with Primus for more robust handling of disconnects and dissemination of updated content if/when a dropped machine comes back online. Transcript data is stored in a Mongo database via a wrapper, livedb-mongo, which allows ShareJS to easily store the documents as JSON objects into Mongo collections. On the front end, I used Bootstrap as the primary framework with the Flat UI theme. Aloft is currently deployed on DigitalOcean.

Current Features

  • Fast AF text delivery that is as close to realtime as possible given your connection speed.
  • OS agnostic! Runs on Mac, Windows, Android, iOS, Linux, anything you can run a modern Internet browser on.
  • User login which allows captioner to create new events as well as visualize all past events by author and event title.
  • Captioner can delete, modify, reopen previous sessions, and view raw text with a click of a link or button.
  • Option to make a session collaborative or not if you want to let your viewers directly edit your transcription.
  • Ability for the viewer to easily change font face, font size, invert colors, increase line spacing, save the transcription as .txt, and hide menu options.
  • Easy toggle button to let viewers turn on/off autoscrolling so they can review past text but be quickly able to snap down to what is currently being written.
  • Ability to run Aloft either over the Internet or as a local instance for cases in which a reliable Internet connection is not available (daemonize using pm2 and access it via your machine’s IP addy).
Aloft homepage
Aloft homepage appearance if you are a captioner

In the Works

  • Plugin that allows captioners using commercial steno software that typically output text to local TCP ports to send text to Aloft without having to focus their cursor on the editing window in the browser. Right now, Aloft is mostly ideal for stenographers using Plover.
  • Ability for users on commercial steno software to make changes in the transcript, with changes reflected instantly on Aloft.
  • Ability to execute Vim-like commands in the Aloft editor window.
  • Angularize front-end elements that are currently accomplished via somewhat clunky scripts.
  • “Minimal Mode,” which would allow the captioner to send links to a completely stripped-down, nothing-but-the-text page that can be modified via parameters passed in the URL (e.g. aloft.nu/stanley/columbia?min&fg=white&bg=black&size=20 would render a page that contains text from job name columbia with 20px white text on a black background); a rough sketch of the idea follows this list.
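A minimal sketch of how that client-side parameter handling might work (hypothetical element IDs, not the actual Aloft implementation), using the standard URLSearchParams API:

// Read ?min&fg=...&bg=...&size=... from the URL and style the caption area accordingly.
var params = new URLSearchParams(window.location.search);
var captions = document.getElementById('caption-area');     // hypothetical element ID

if (params.has('min')) {
	document.getElementById('menu').style.display = 'none';  // hide all the chrome in minimal mode
}

captions.style.color = params.get('fg') || 'white';
captions.style.backgroundColor = params.get('bg') || 'black';
captions.style.fontSize = (params.get('size') || 20) + 'px';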

That’s all I have so far but for the short amount of time Aloft has been in existence, I’ve been extremely satisfied with it. I haven’t used my commercial steno software at all, in favor of using Aloft with Plover exclusively for about two weeks now. Mirabai has also begun to use it in her work. I’m confident that once I get the add-on working to get users of commercial stenography software on board, it’ll really take off.

Using Aloft on the Job
Using Aloft on the Job on my Macbook Pro and iPad

TL;DR

I was captioning a web development course when I realized how unsatisfied I was with every commercial realtime system currently available. I consulted the instructors and used what I had learned to build a custom solution designed to overcome all the things I hate about what’s already out there and called it Aloft because it maintains the avian theme of the Open Steno Project.

Try it out for yourself: Aloft.nu: Simple Realtime Text Delivery

Special thanks to: Matt Huntington, Matthew Short, Kristyn Bryan, Greg Dunn, Mirabai Knight, Ted Morin