Lomoji: A New Tool for Polyglots

I often make the analogy that trying to be a polyglot in an English-speaking country is like trying to maintain a hobby of keeping igloos somewhere non-polar. Without year-round cold, you either have to keep adding snow or watch in resignation as your bespoke collection of icy enclosures melts away.

Language atrophy happens especially quickly for us English speakers because, everywhere, it’s hot. It’s far more likely that the person you’re talking to speaks English better than you speak their native language, or that they underestimate your abilities. So chances are that unless you’re already super fluent in another language, whenever you try to practice it, you’ll get a, “Wow! Your [insert language] is very good!” and the interaction will resume in English. (Ugh.) And even though I live in New York City, it’s still an effort to keep all of my igloos from turning into puddles of mush.

So I keep adding snow every day (i.e. artificially force exposure to all my languages constantly). Like a good polyglot, I have Google Translate on the first page of my iPhone’s home screen. Whenever I have random moments of pause throughout the day when I realize, “Wait. How would you say [word/phrase] in [language A]?” I’ll stick it into Translate for a quick answer.

Using Google Translate as a polyglot, though, is sometimes a pain in the butt: you often end up running the same translation over and over, one target language at a time.
One translation at a time?! Ugh.

But wait. I’ve also just realized I don’t know how to say that in B, C, D, or E! Okay. Hit X. Select B. Cool. Hit X. Select C. Cool. Hit X. Select D… this quickly gets tiresome because you have to translate each one manually, and the Google Translate app only keeps a history of your five last-translated languages.

Gif of Kim Kardashian going, "Like, I'm so annoyed."
Okay. Maybe not that annoying.

As we polyglots tend to, I “dabble” quite a bit — enough to notice how frequently I was having to scroll back to that huge list, even for my most-used languages. On a random day when I wanted a reminder of how to say “shelf” in Spanish, I had to select it from the master list manually because it had gotten bumped off the last-used list by random forays into Danish and Portuguese.

WHERE’S MY SPANISH?

I wanted something where I could select ALL the languages I’m currently “dabbling” in so I could view translations for all of them concurrently. Well, I had never implemented the Google Translate API in anything before, so I took this as a learning opportunity. I went ahead and created myself an account on Google Cloud Console, got myself an API key, and started playing.

The result is Lomoji.

Screenshot of Lomoji, an iOS app that shows a text box and a green button with a back and forth arrow to indicate translate action.
Lomoji main screen

Lomoji is a small React Native app that lets you select multiple target languages and persists them in AsyncStorage, so that whenever you perform a translate action, a quick glance instantly reminds you of how to say something in all of your languages. Languages only disappear when you uncheck them, not because you happened to translate a few small things in five other languages beforehand.
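Under the hood, the persistence piece is tiny. Here’s a minimal sketch of the idea; the saveLangs/loadLangs helpers and the selectedLangs key are hypothetical, not Lomoji’s actual code:

// A minimal sketch of persisting selected target languages.
// The 'selectedLangs' key and these helpers are hypothetical.
import { AsyncStorage } from 'react-native';

const saveLangs = async (langs) => {
    // e.g. langs = ['es', 'da', 'pt']
    await AsyncStorage.setItem('selectedLangs', JSON.stringify(langs));
};

const loadLangs = async () => {
    const stored = await AsyncStorage.getItem('selectedLangs');
    // Fall back to an empty list on first launch.
    return stored ? JSON.parse(stored) : [];
};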

First select the language you want to translate from…

 

Selecting languages in Lomoji
Selecting a “from” language

Then select your destination language(s)…

Dropdown menu of languages in Lomoji
Dropdown menu of “to” languages

After hitting “confirm,” we have our nice list of igloos!

Selected from and to languages in settings panel
Completed settings panel

Enter the string to be translated and… voilà!

Panel showing final translations in all destination languages

THAT’S WHUT I’M TALKIN’ ABOUT.

Beyoncé throwing up two sign of the horns.
WHOOP, WHOOOP!

Google Translate makes it easy to get an updated list of all available languages: as noted in the documentation, you just make an API request and Translate responds with a JSON object containing the most up-to-date list. So I made the component asynchronously fetch the list of available languages and update the internal list, falling back to the built-in store of languages if the request fails. This keeps the app future-friendly as support for languages is added (or perhaps removed) by Google.

async componentDidMount() {
    await axios
        // Get the most recent list of languages from Google.
        .get(`https://translation.googleapis.com/language/translate/v2/languages?key=${config.apiKey}&target=en`)
        // If that was successful, set our internal state.
        .then((result) => {
            this.setState({
                langList: result.data.languages
            });
        })
        // If that fails, use the default built-in list of languages.
        .catch((err) => {
            this.setState({
                langList: languages.data.languages
            });
            alert(`Unable to retrieve list of languages from Google Translate.\nLocal list used instead.\n\n${err}`);
        });
    this.setState({assetsLoaded: true});
}
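The translate action itself is then just a matter of fanning the input string out to every stored target language. Here’s a hedged sketch of how that might look against the v2 REST endpoint; translateAll is hypothetical and not necessarily how Lomoji does it:

// A sketch of translating one string into several target languages at once.
// translateAll is hypothetical, not Lomoji's actual code.
const translateAll = async (text, source, targets) => {
    const responses = await Promise.all(targets.map((target) =>
        axios.get('https://translation.googleapis.com/language/translate/v2', {
            params: { key: config.apiKey, q: text, source, target }
        })
    ));
    // The v2 API nests results under data.data.translations.
    return responses.map((res, i) => ({
        lang: targets[i],
        text: res.data.data.translations[0].translatedText
    }));
};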

But that’s it for now! In a future iteration, it would definitely make for a more fluid user experience if the input language were detected automatically. There are still a few loose ends to tie up, but I was too excited not to post about it. I’ll update as I move toward publishing it!

Oh, and if you’re wondering why I decided to call it “Lomoji,” it’s because in Korean, “X (eu)ro muoji” (X[으]로 뭐지?), where X is a language, is a phrasal suffix that roughly translates to, “How would you say it in X?” I simplified it and anglicized it to make it catchy to Western ears.

So “Yeongeo ro muoji?” (영어로 뭐지?),  where “yeongeo (영어)” is “English,” would translate to, “How would you say that in English?” Clever?

My Top Essential Plover Commands

Plover is the world’s first free and open-source stenography software. But unlike traditional computer-aided (machine-shorthand) translation systems, which typically resemble sandboxed word processors, Plover lets you use your steno machine as an OS controller/input device. Each has its own user base and use cases, and I’m not saying one is inherently better or worse than the other; they fundamentally serve different audiences. However, one of the benefits of Plover is that it unlocks a whole new world of single-stroke commands one can use to control one’s computer. Here’s my list of the top commands that I use nearly every day. Keep in mind that I use macOS. If you use Windows, for most commands you can just substitute “ctrl” wherever I use “command” and you’ll be good to go. An exhaustive list of all the commands Plover has to offer is available here.

1. Switching workspaces

When you have several workspaces on macOS and you want to switch between them quickly.

SKR-R = {#control(left)}

“switch to the workspace to the left” (mnemonic hook: “screen-⬅”)

SKR-G = {#control(right)}

“switch to the workspace to the right” (mnemonic hook: “screen-➡”)

*Think of the RPBG keys as standard keyboard arrow keys.
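If you’d like to add these yourself: a Plover dictionary is just a JSON file mapping steno outlines to their output, so the two entries above look like this:

{
  "SKR-R": "{#control(left)}",
  "SKR-G": "{#control(right)}"
}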

2. Go to the end/beginning of the document

Useful for when you’re making edits while captioning and the speaker starts talking again, or you just want to snap to the bottom of the document you’re working on.

SR-RZ =  {#command(down)}

SR-FD = {#command(up)}

3. Git commands

TKPWHRA*UL (git add [all]) = {>}git add .

TKPWEUPLT = {>}git commit

TKPW*EUPLT = {>}git commit -m "{^}{-|}

TKPWHRURP = {>}git pull origin master

TKPWHRURB =  {>}git push origin master

TKPWAUTS = {>}git status

4. One-stroke money formatters

When people say sums of money in ways that make you stop and think where to put the commas and periods, do these instead.

TKHR-RS = {*($c)}

“Turn the previous number into a correctly-formatted sum of money.”

3 + TKHR-RS becomes $3

4.5 + TKHR-RS becomes $4.50

.33 + TKHR-RS becomes $0.33

THO*UDZ = {^}000{*($c)}

“Turn the previous number into thousands of dollars.”

4 + THO*UDZ becomes $4,000

200 + THO*UDZ becomes $200,000

PH*LDZ = {^}{#option(left)}${#option(right)}million

“Turn the previous number into millions of dollars.”

7 + PH*LDZ = $7 million

*Use the same paradigm to do trillions of dollars (TR*LDZ) and billions of dollars (PW*LDZ).

5. Start upper with no space, start lower with no space.

When you want to suppress automatic spaces and want to control the case of the first letter of the output.

KPA* = {^}{-|}

“Start text where my cursor is currently located and capitalize the first letter.”

STHROER = {^}{>}

“Start text where my cursor is currently located and make sure the first letter is lowercased.”

6. Special characters that need to be escaped.

Certain characters that are part of Plover’s dictionary syntax need to be escaped correctly to define them so that Plover doesn’t get confused and misinterpret the entry.

KHRURB (curly brace open) = \{{^}

{

KHRURBS (curly brace close) = {^}\}

}

PWHR*RB (backslash) = {^\^}

\

7. Comment slashes

KPHERBS (comment slashes) = {^}//

// (hi)

KPH*ERBS (comment slashes + cap next) = {^}//{-|}

// (Hi)

TAEUBL/TPHR-P = (╯°□{#Shift_L}°)╯︵ ┻━┻

 (╯°□ °)╯︵ ┻━┻

*Note the required {#Shift_L} to prevent special-character encoding weirdness.

SKWHR*UG = ¯\_(ツ{#Shift_L}{^})_/¯

¯\_(ツ)_/¯

*Note the required {#Shift_L} to prevent special-character encoding weirdness.

8. Brightness up/down

PWR*P = {#MonBrightnessUp}

PWR*B = {#MonBrightnessDown}

*Again, think of the P and B as up and down on a standard keyboard arrow set.

9. Sound controls

SRAO*UP (volume up) = {#AudioRaiseVolume}

SRAO*UB (volume down) = {#AudioLowerVolume}

SRAO*UPLT (volume mute) = {#AudioMute}

10. On-the-fly speaker definition

For when you didn’t create a speaker definition for someone beforehand but you want to still mark them by their name in the transcript rather than using chevrons (>>).

SPOEU = {*<}{^:}{#Alt_L(Left)}{^\n\n^}{#command(right)}{^ ^}{-|}

Inserts a new line, puts the last name (or any word that comes prior to the stroke) in all caps, adds a colon, and capitalizes the next character.

“Josh” becomes \nJOSH:(capitalize next)

11. Go to end of line & add a semicolon

TKHRAO*EUPB (go to end of line) = {#command(right)}

SKHROEUPB (add semicolon to end of line — used frequently in writing JavaScript) = {#command(right)}{;}

12. CamelCase

KPW-PBG (cap next word and attach it) = {^}{-|}

set KPW-PBG attribute becomes setAttribute

KPHAO*EL (cap the first word and the next word, attach them) = {*-|}{^}{-|}

word KPHAO*EL press becomes WordPress

13. Add comma at end of line, make new line

Handy for writing JavaScript objects or working with JSON.

KPHAPBL (comma + new line) = {#right}{,}{^\n^}{^}

14. Indented parentheticals

Used for captioning to mark things like laughter or applause.

KHRAFT (laughter) = {^\n^}{^\n^}{^\t^}{^}[ Laughter ]{-|}

Adds a new line, indents one tab and adds [ Laughter ].

15. Dot-slash

Used for pointing to the current working directory in bash.

TKHR*RB (dot slash) = ./{^}

16. “S” is for “sticky” (parentheses & brackets)

Useful for writing function declarations and array pointers.

SPR-PB = {^}({^}

SPR-PBD (paren, double) = {^}()

SPWR*BG = {^}[{^}

So these are a few of the most important ones that I use for now. I’ll keep adding to this list as I think of more!

Hacking Music Accessibility Via Realtime Stenography (and Code): Recap

This March, I got to present at the NYC Monthly Music Hackathon about my experiments using stenography and coding to enhance accessibility of auditory information for those with hearing loss who use real-time captioning as a form of accommodation.

Photo of Stanley in front of an audience. On the left side there's a screen with the title slide of his talk and on the right is Stanley standing in front of another screen, to which he's transcribing himself.
If you think that looks like I’m transcribing myself, you would be correct!

Problem

While sign language interpreters can use their body and expressive, animated signing to convey the tone, rhythm, and various other qualities of music, real-time captioners are comparatively limited in their expressive capabilities. Deaf attendees who primarily use spoken language typically must rely on a combination of their residual hearing, vibrations, cues from the performer or audience members, and parenthetical descriptions inserted by the captioner to piece together a mental representation of what’s being heard.

Here’s a short clip of what live captioning looks like in action:

I had been providing real-time captioning services for almost six years, but this was something I hadn’t considered at all until I was hired by Spotify to caption their February Monthly Music Hackathon. A deaf attendee, Jay Zimmerman, approached me and requested that I describe the audio segments during the demo portion as best I could. As a musician who had lost his hearing later in life, Jay wanted and appreciated a greater level of access than the simple music note often seen in TV closed captions, or simply the lyrics on screen. So I did the best that I could:

 On screen, it shows a bunch of dialogue and then parentheticals for [ Big Room House/EDM ] and [ Tiësto, Martin Garrix - "The Only Way Is Up" ].
Stanley’s captioning screen showing captions from Spotify’s Monthly Music Hackathon event.
For me, this wasn’t good enough. Even when the captioner knows a song well enough to nail the title, it still feels lacking. It just isn’t a suitable accommodation for someone who truly wants to experience the acoustic information in a more direct way. There’s just too much information to convey to sum it up in short parentheticals. Additionally, when I am captioning, I’ve found it difficult to come up with accurate and succinct descriptions on the spot when things are often happening very quickly in live settings.

Brainstorming and Thought Process

A while back, I wrote a post on my project, Aloft. In short, it’s a cloud-based caption delivery app that enables participants to access stenographers’ live transcription output on any Internet-enabled device with a modern browser. Its main use case was originally for those participating in events or lectures remotely:

Screenshot of a remote participant watching the keynote address at WordCamp US Nashville depicting a PowerPoint slide and the realtime captions underneath.
Example of Aloft being used by a remote conference participant.

But I actually use Aloft in all of my captioning work, including on-site jobs where I hook up to a projector, drag the participant view page over to the monitor, and display it full screen, like so:

Photo depicts a man in a wheelchair giving a talk while a full-screen view of live transcription is visible behind him.
Example of Aloft being used on-site at the March NYC Monthly Music Hackathon. Speaker: Patrick Anderson.
Image of a large audience at a tech conference watching the keynote with live captions above the presentation slides.
Example of using Aloft for large-event captioning at WordCamp US 2017.

Well, one of the great benefits of working with software you wrote yourself is that, as you encounter different needs, you’re free to modify your tools to meet them. In fact, one of the reasons I created Aloft was how annoyingly restrictive and unintuitive commercial legal-transcription viewing platforms typically are with respect to seemingly simple things like changing the font size or the color palette.

So that got me thinking…

What if we could simultaneously convey the transcription of spoken content in addition to some visual representation of the audio?

Initial Research

At first, I didn’t know where to start since, at the time, I hadn’t really dealt with computer audio processing at all. I began by researching different JavaScript packages (since it needed to be web compatible and I didn’t want to deal with running code through an interpreter). I came across a library called p5.js, which a lot of the folks at the hackathon seemed to have either heard about or used in the past. The Frequency Spectrum add-on seemed pretty promising.

 

Screenshot depicting the demo page for the Frequency Spectrum add-on. A frequency spectrogram (squiggly lines on a light gray background) and some JavaScript code beneath are visible.
P5’s Frequency Spectrum add-on

Implementation

So I created a special route in Express so the visualizer would live only on a page I could point the browser to on the occasions I need it. We don’t want all those extra scripts loading when I’m doing just normal captioning!

app.get('/soundbar/:user/:event', function(req, res) {
	// Look up the session before rendering the soundbar-enabled view.
	Event.findOne({'user': req.params.user, 'event': req.params.event},
	function(err, event) {
		if (err) {
			throw err;
		} else {
			res.render('watch-soundbar', {
				'user': req.params.user,
				'event': req.params.event
			});
		}
	});
});

So now, the libraries would load only if I went to localhost:4000/soundbar/stanley/monthly-music-hackathon.

Then it was just a matter of importing the libraries into the view and initializing p5 via the code provided on the demo page, with some minor modifications.

var mic, fft;

function setup() {
	// Size the canvas relative to the parent window and attach it to the
	// #waveform div.
	let canvas = createCanvas(windowWidth, windowWidth / 5);
	canvas.parent('waveform');

	noFill();

	mic = new p5.AudioIn();
	mic.start();
	mic.amp(10);
	fft = new p5.FFT();
	fft.setInput(mic);
}

function draw() {
	// bgColor and fontColor are interpolated into the view by the template.
	background('#{bgColor}');
	stroke('#{fontColor}');
	strokeWeight(3);

	var spectrum = fft.analyze();

	beginShape();

	for (let i = 0; i < spectrum.length; i++) {
		vertex(i, map(spectrum[i], 0, 255, height, 100));
	}

	endShape();
}

function windowResized() {
	resizeCanvas(windowWidth, 250);
}

A few changes I made: I modified the canvas setup so that it automatically calculates the height and width based on the parent window, so it’s always proportional to the size of the captioning screen. Since we don’t know the foreground and background colors ahead of time, as they’re dynamically set by an Angular service, I passed those in as variables that are pulled from local storage and sent to the controller. Finally, I passed in the div in which it should display, and we have a working audio waveform at the bottom of the screen!

 Screenshot of a web page displaying real-time transcriptions with a squiggly-line audio frequency spectrum displayed at the bottom.
View of frequency spectrum at the bottom of the Aloft viewer page.

But, wait a minute. What if I don’t want to see the “soundbar” all the time? What if I only want it to be displayed when appropriate, as in during a live musical demo or performance? Well, why don’t I create a command that I can invoke from the steno machine that would turn it on and off? So I arbitrarily decided that &SOUNDBAR would be the flag I would use to turn the soundbar on and implemented it like so:

// Hide the soundbar if the number of &SOUNDBARs is even; show it if odd.

if ((text.match(/&SOUNDBAR/g) || []).length % 2 === 0) {
	soundbar.style.display = 'none';
	captionArea.style.paddingBottom = '4rem';
} else {
	soundbar.style.display = 'inherit';
	captionArea.style.paddingBottom = '0rem';
}

So essentially, every time the viewer page receives text, before it modifies the DOM, it uses a regular expression to count occurrences of &SOUNDBAR: if the number present in the transcript is odd, it displays the soundbar; if it’s even, it hides it. Probably not the most optimal solution, in that the microphone stays active and the calculations keep running in the background whether or not the soundbar is visible, but it works for now.

Now, the soundbar remains hidden until I choose to show it by writing SPWA*R on my steno machine, which spits out &SOUNDBAR in the transcript. As soon as I don’t need it anymore, I write SPWA*R again, the number of soundbar flags on the page is now even, and it hides the div.

 Screenshot of two screens, one of the captioner's screen where you can see the captioner wrote "&SOUNDBAR" to flag Aloft to open the soundbar. On the second screenshot, you can see the squiggly frequency spectrum "soundbar" displayed.
How the &SOUNDBAR flag is used by the captioner to display the frequency spectrum.

An obvious drawback is that if the consumer wants a copy of the transcript, I have to go back and delete all the occurrences of &SOUNDBAR, but that’s a problem to solve for another day. Another problem is that p5 is always running in the background even when the div is hidden. This is a limitation of the package itself, unfortunately, since I’ve had trouble loading it via an event handler as opposed to how it was designed to work: on page load.
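Stripping the flags afterward should at least be a one-liner, something like this hypothetical cleanup step (not yet in Aloft):

// Hypothetical post-processing step: strip &SOUNDBAR flags from a saved
// transcript before handing it to the consumer.
const cleanTranscript = (text) => text.replace(/\s*&SOUNDBAR\s*/g, ' ').trim();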

A limitation that Jay pointed out was that, while the soundbar solution is great for visualizing instantaneous peaks (what’s loud or quiet at the current moment), it’s relatively poor at visualizing what’s going on temporally. So I played around with another possible solution, this time using a different Web Audio-based package called Awesome Realtime Audio Visualizer in JS, HTML, created by Will Brickner. His audio visualization spectrogram conveys frequency on the vertical axis, time on the horizontal axis, and intensity through pixel brightness. When the visualizer is activated, a yellow line scans across the screen from left to right, drawing a spectrogram in real time, giving one more of a temporal idea of the auditory content:

Screenshot of a captioning window with a dynamic audio spectrogram as the complete background.
Demo of spectrogram with “Crowd Control” – Fisher playing in the background.

In contrast to the soundbar, this visualizer takes up the whole screen and becomes the entire page’s background. In the view, it lives as an HTML canvas object. As with the soundbar example, this implementation also has its own route (/spectrogram) and it also allows the captioner to turn the audio visualizer on and off depending on need.

Final Thoughts and Next Steps

The biggest challenge is really being able to convey enough depth to make the experience as immersive and inclusive as possible. While these two solutions are a great start, perhaps combining them somehow into one page would let the user benefit from both formats. Or better yet, incorporate a more-complex visualizer to allow for more bandwidth. As Jay puts it:

Both give a sense of amplitude and pitch, with amplitude being easier to perceive so we can see when sound is happening and whether or not it is soft or loud.

The issue with frequency/pitch information is how to make it clear and easy to decipher, which leads to linear vs exponential or logarithmic graphing.

Easiest way to comprehend pitch/frequency quickly is on the vertical, so high pitches are high and low pitches are low.

So what if we could harness a different axis like color to indicate volume, leaving the y-axis free for pitch and the x-axis to denote time? What if instead of a line or a sequence of bars, we had a 3D object that morphs and changes in size in response to different volumes or frequencies? But then that leads to the question: at what level of complexity would the visuals negatively affect the legibility of the captioning?

Q&A CART & Captioning Edition

I started this due to the large number of messages I receive regarding CART and captioning, so I decided to make a post answering the most frequently asked questions to make it easier and quicker to distribute my answers.

Q. I heard you shouldn’t use briefs in CART or captioning. Do I have to stop/get rid of them?

A. Absolutely false. I frequently write strokes containing three words and I do fine. So, no.

Q. What is the best equipment to use?

A. This is personal. I use a MacBook Pro running macOS, and I use either my Luminex or my Lightspeed. The text is usually sent to an iPad. There is no best set of equipment; it’s what you prefer. Things to consider are the size of the computer and writer, hard disk space, memory, number of USB ports, aesthetics, etc. Again, it’s what you prefer/need, so you have to do your own homework to determine what those parameters are.

Q. What is the best way to get started in captioning?

A. First, I would recommend building a strong dictionary and getting to a point where you feel very comfortable writing realtime at over 200 WPM. Then, work on fingerspelling and make sure you can consistently do it well even when the tempo picks up. Finally, I’d recommend you sit in with a professional captioner, write along on a few jobs, and maybe have him/her evaluate your work before you set off on your own. Other than that, it’s a matter of practice and getting familiar with the subject matter you encounter.

Q. What software do you use?

A. I use a combination of Plover to translate steno and Aloft to send the text to devices.

Q. Why don’t you use conventional (proprietary) stenography translation software like Eclipse or Case CATalyst? 

A. I am a strong proponent of the Open Steno Project, and I prefer the greater control and flexibility afforded to the user by a steno-based system of direct OS control rather than a locked-in word processor like all currently available commercial options. Also, if I need a feature that doesn’t exist yet, it’s a simple matter of working with Plover developers to get it added to the list of proposed new features.

 

Q. Can I use Plover or Aloft?

A. Plover is available for anyone to use for free. Aloft is not currently available for public use, not because I want to make it proprietary, but more because I do not have the time to act as tech support if there are issues.

Q. Does Plover work with my machine?

A. Check this list for supported hardware.

Q. I’m a fast typist. Is this a career for me?

A. Stenography skill depends much more on one’s ability to multitask and on quick linguistic capability than on finger dexterity. I’d say being a successful instrumentalist or a speaker of multiple languages is a far more accurate predictor of success than simply being able to type fast.

Q. Do I have to attend a steno school to learn this skill?

A. School is not absolutely necessary. I didn’t spend a cent on steno school. It all depends on how motivated and disciplined you are to take on something like this, which requires consistent, unyielding dedication to achieve anything within a reasonable time span. I’d say if you think you could teach yourself a language to fluency without the pressure of classroom instruction, you can do it.

Q. Do I need to be certified to work?

A. Depends on where you live so you may have to ask someone in your state/province/country.

Q. What is the best theory?

A. This is also a matter of personal preference. I use Magnum and Philadelphia Clinic because I like briefs and writing short. You might prefer a more consistent (but longer) style of writing, in which case, a theory like Phoenix might be more appropriate for you.

Q. Do you charge per page of transcript?

A. No, captioning typically charges hourly, usually at a much higher hourly rate than legal reporting.

Q. Do you edit your transcripts if they ask for one?

A. Not heavily. For captioning, your realtime should be nearly perfect anyway, with just the occasional error here and there.

Q. What is that harness thing you use for walking captioning?

Harness I use to use my steno machine while walking.

A. That’s a Connect-A-Desk.

Q. Do you have any tips for improving my realtime?

A. Practice and put everything in your dictionary.

Q. Can I also steno emojis?

A. Not unless you use Plover.

Q. What happens if you encounter a term or name you don’t know? 

A. Fingerspell the best you can. The goal here is readability, so it’s imperative that you be proficient at approximating words you don’t know, or at fingerspelling things you either aren’t sure you have an entry for or know for sure you don’t have in your dictionaries. I would highly discourage blasting through words syllabically that you aren’t sure will translate (as you would in court reporting).

Q. How do you caption foreign languages?

A. I come up with a theory for the language; after that, it’s just a matter of doing what anyone must do with steno: practicing and adding entries to your dictionary.

Q. Is captioning easier than court reporting?

A. No.

Q. Is any commercial steno software better than the other for CART or captioning?

A. No, they’re all pretty equivalent. It’s really a matter of personal preference.

Q. Is it ever okay to write “TIA” at the end of a post online to express gratitude?

A. No. Saying “thanks” is enough. TIA actually stands for “this is awkward.”

Vragen Van Theo

Several months ago, I had a pretty comprehensive discussion with a Dutch colleague, Theo Tomassen, who had sent me questions regarding my general procedures as a captioner. Seeing as how there is very little material on stenography available online in Dutch, I thought I’d share his questions and my responses on my blog. It’s still not complete, as Theo is working on his responses to the questions I posed in return, but I’ll post a follow-up as soon as I have them! Our exchange took place entirely in Dutch and appears below in English translation. Also, please keep in mind that Dutch is not my native tongue.



THEO: My most important question is a bit longer, so I’ll need to introduce it. It concerns fingerspelling. I told you that I work in Dutch as a certified speech-to-text interpreter (“CARTer”) for the deaf and hard of hearing. The demand for English is high, but only a few speech-to-text interpreters are willing or able to take it on. English, however, is frequently spoken by Dutch people. This means that Dutch institutions, place names, and personal names regularly come up that naturally aren’t in my dictionary.

STANLEY: That’s wonderful to hear! I found Amsterdam very, very beautiful when I visited this summer. I started thinking about the possibility of maybe moving there to work, but for now I’m enjoying life in New York. Come January, I’ll have lived here only two years. I’m originally from Seattle.

THEO: Suppose a speaker were to say: “Ladies and gentlemen, I should now like to give the floor to Theo Tomassen, president of the Nederlandse Schrijftolken Vereniging.” I’m not sure it’s practical to fingerspell entire word clumps like that on the Diamante every time. I know Stenograph makes a QWERTY keyboard you can mount on the Diamante. I can type very fast on QWERTY; I’ll never reach that speed on the Diamante.

STANLEY: Personally, I would never use an external keyboard. Switching between the two would cost you a great deal of time if you had to do it often.

THEO: So my question: is it a strange workflow to switch between the Diamante and that keyboard in such cases? I understand this might be tricky at a speed of 230 words per minute, but in the Netherlands, the interpreters interrupt the speaker when they talk at those kinds of abnormal speeds.

STANLEY: As I said: I would rely only on prep and on fingerspelling for words I don’t have in my dictionary.

THEO: When you’re providing CART and a word comes up that you strongly suspect isn’t in your dictionary, do you fingerspell it or choose a synonym?

STANLEY: Almost always fingerspelling. I would choose a synonym only if I had no other choice (e.g., the speaker is going too fast or I couldn’t hear the exact word).

THEO: When you’re providing CART, do you write in all caps —

STANLEY: No. Mixed case, always ;].

THEO: — and contractions for “he’s” and the like?

STANLEY: I generally try to produce a truly verbatim record, so I write everything, as much as possible, exactly as I heard it, even when what the speaker says isn’t formally correct English. If he says “’cause” instead of “because,” that’s what goes into the transcript. Since these (in contrast to courtroom situations) aren’t official records, it isn’t always necessary to adhere to formal grammar rules.

THEO: If, for example, an institution is called “City Health Care” and the tempo is high for me, I write “city health care.” I know Case CATalyst has a stroke to capitalize, say, the previous three words, but that’s all extra baggage again.

STANLEY: Yes, generally when the speed gets too high, I don’t worry so much about small details like capitalization. The most important thing at that moment is getting all the words.

If possible, you should use only macros that make their changes before the text is sent out. In other words, use functions that capitalize the next word (not the previous one). With most systems, such as Text-On-Top or broadcast encoders, your edits won’t be reflected if you try to make changes after the fact. Let me know if this doesn’t make sense.

THEO: Have you defined certain things in your dictionary that are enormously handy for CART and save you time?

STANLEY:

  • A stroke for adding definitions.
  • Common symbols such as @, $, _, %, <, >, <=, >=, -, –, =, !, plus Greek letters, and definitions for Spanish and German letters: á, é, ë, ä, etc. I’ve even defined all the emoji.

THEO: What’s your opinion on briefs? When I look at the Facebook page Supporting Court Reporter Students, I see a lot of people asking for briefs for the craziest terms. Terms you don’t come across all that often. Of course briefs save time, but I just don’t retain briefs for rarer terms.

STANLEY: If I have to write a word or phrase that takes more than two strokes more than twice, I make a brief. I have little or no trouble memorizing lots of briefs, but every person’s talents are unique, so I think you should do whatever is easiest for you.

I also want you to know that you shouldn’t take everything you read on that page seriously. While there is good and useful information there, there’s also a great deal of advice that isn’t true, based on old-fashioned opinions. You should always form your own judgment after consulting a number of people.

THEO: Are there things you learn in court reporting school that you’ll truly never apply in CART? I’m thinking, for example, of “8 o’clock.” You’d have to write that as “8:00 o’clock.” I’ve defined strokes in my dictionary like /8 /KHRBG, which then comes out as “8:00 o’clock.” But from a CART perspective, I find it a bit absurd to cling to a hypercorrect style in every case. Or do you see that differently?

STANLEY: I have no experience working in court, so I don’t feel authorized/qualified to answer this question.

THEO: Following up a bit on question 6: do you have any suggestions for how I could approach my speedbuilding?

STANLEY: You should practice with a variety of materials. The most important thing is not to get bored. I often listened to the podcasts from howstuffworks.com. They’re free to download on iTunes. Find content you’re particularly interested in. You don’t have to listen to the same boring drills over and over. Pick, say, an English television or radio program you enjoy and go with that.

THEO: I will never in my life do work that resembles court reporting; I’ll be starting right away with CART. So there’s no point in my becoming a very good court reporter who can’t provide CART. Suppose a rapidly spoken passage is about “Chicago” and I don’t have a brief for it. As a court reporter, I could just hit a “C” each time, because I’d remember it was about “Chicago.” But as a CART provider, I can’t get away with that. So I need and want to study at CART quality from the start: the output has to be fully readable text, in which I naturally accept a few errors. But a whole string of untranslates that a court reporter could still read makes no sense at all for CART.

STANLEY: Yes, that’s completely unacceptable for CART providers. The first time, you fingerspell the word that isn’t in your dictionary and let the software give you a brief suggestion. Then, when you have a moment, you make a definition that will stay in your dict.

THEO: Are there certain realtime settings in Case CATalyst that I should definitely take a good look at because they’re handy? Maybe you use Eclipse or something else, but I imagine many of the features overlap a great deal?

STANLEY: Could you be more specific here?

THEO: What setup do you use to send the text wirelessly to, say, a client’s tablet? Case CATalyst now has CASEview, but its license isn’t exactly cheap. And as far as I know, I’d need a Wi-Fi connection, which isn’t possible everywhere.

STANLEY: In the past I used the free version of Bridge, but I recently wrote a program that performs this function! What’s more, it’s free (for now). You can read my not-so-short write-up on my website: http://blog.stanographer.com/aloft.

THEO: For my Dutch work I use Text On Top for a wireless connection. I can also use it with Case CATalyst, but then the layout doesn’t carry over. When I hit Enter, nothing happens on the client’s screen. The developer explained why that is; it would take a while to explain it all here. But that’s how it is.

STANLEY: I think there’s a setting for “verbatim mode” or something, where you have to press Ctrl + Enter to make a new line rather than plain Enter. But I don’t use CC, so I don’t know what could be causing that behavior.

THEO: I saw your video blog about the Lightspeed. Is it something worth considering?

STANLEY: YES! I LOVE IT! Do you have any specific questions about it? My god, I can’t talk in this language any longer. Haha.

Aloft!

First off, apologies for the long radio silence. It’s been far too long since I’ve made any updates! But I just had to share a recent project that I’m currently probably the most excited about.

[ Scroll down for TL;DR ]

To begin, a little background. For the past several months, I’ve been captioning an intensive web design class at General Assembly, a coding academy in New York City. Our class in particular utilized multiple forms of accommodation with four students using realtime captioning or sign interpreters depending on the context, as well as one student who used screen sharing to magnify and better view content projected in the classroom. A big shout-out to GA for stepping up and making a11y a priority, no questions asked.  

On the realtime captioning front, it initially proved to be somewhat of a logistical challenge, mostly because the system my colleague and I were using to deliver captions is commercial deposition software, designed primarily with judicial reporting in mind. The system turned out to be less than ideal for this specific context for a number of reasons.

  1. Commercial realtime viewing software either fails to address the needs of deaf and hard of hearing individuals who rely on realtime transcriptions for communication access or addresses them half-assedly as an afterthought.

The UI is still clunky and appears antiquated. Limited in its ability to change font sizes, line spacing, and colors, it makes it unnecessarily difficult to access options that are frequently crucial when working with populations with diverse sensory needs. Options are sequestered behind tiny menus and buttons that are hard to hit on a tablet. One of the most glaring issues was the inability to scroll with a flick of the finger. Having not been updated since the widespread popularization of touch interfaces, there is practically no optimization for this format. To scroll, the user must drag the tiny scroll bar on the far edge of the screen with high precision. Menus and buttons clutter the screen and take up too much space and everything just looks ugly.

  2. Though the software supports sending text via the Internet or locally via Wi-Fi, most institutional Wi-Fi is either not consistently strong enough, or too encumbered by security restrictions, to send captions reliably to external devices.

Essentially, unless the captioner brought his or her own portable router, text would be unacceptably slow or the connection would drop. Additionally, unless the captioner either has access to an available ethernet port into which to plug said router or has a hotspot with a cellular subscription, this could mean the captioner is without Internet during the entire job.

Connection drops are handled ungracefully. Say I were to briefly switch to the room Wi-Fi to google a term or check my email. When I switched back to my router and started writing, only about half of the four tablets would usually survive and continue receiving text. The connection is very fragile, so you pretty much have to set it and leave it alone.

  3. Makers of both steno translation software and realtime viewing software alike still bake in lag time between when a stroke is hit by the stenographer and when it shows up as translated text.

A topic on which Mirabai has weighed in extensively: most modern commercial steno software runs on time-based translation (i.e., translated text is sent out only after a timer of several milliseconds to a second runs out). This is less than ideal for a deaf or hard of hearing person relying on captions, as it creates a somewhat awkward delay, one that a hearing person merely watching a transcript for confirmation, as in a legal setting, would not even notice. (A sketch of the difference follows this list.)

  4. Subscription-based captioning solutions that send realtime over the Internet are ugly and add an additional layer of built-in lag due to their reliance on Ajax requests that ping for new text on a timed interval basis.

Rather than utilizing truly realtime technologies, which push changes out to all clients immediately as the server receives them, most subscription-based captioning services rely on the outdated tactic of burdening the client machine with repeatedly pinging the server for changes in the realtime transcript as it is written. Though not obviously detrimental to performance, the obstinate culture of “not fixing what ain’t broke” continues to prevail in stenographic technology. Additionally, the commercial equivalents to Aloft are cluttered with too many on-screen options, without a way to hide the controls, and, again, everything just looks clunky and outdated.

  5. Proprietary captioning solutions do not allow for collaborative captioning.

At Kickstarter, I started providing transcriptions of company meetings using a combo of Plover and Google Docs. It was especially helpful to have subject matter experts (other employees) be able to correct things in the transcript and add speaker identifications for people I hadn’t met yet. More crucially, I felt an overall higher sense of integration with the company and the people working there as more and more people got involved, lending a hand in the effort. But Google Docs is not designed for captioning. At times, it would freeze up and disable editing when text came in too quickly. Also, the user would have to constantly scroll manually to see the most recently added text.
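To make the time-based translation point above concrete, here’s an illustrative sketch; send is a hypothetical delivery function, and this is not any vendor’s actual code:

// Timer-based translation holds text until a delay elapses.
var buffer = '';
var timer = null;

function onTranslatedText(text) {
    buffer += text;
    clearTimeout(timer);
    // Nothing reaches the viewer until the writer pauses for ~500 ms.
    timer = setTimeout(function () {
        send(buffer); // hypothetical delivery function
        buffer = '';
    }, 500);
}

// A captioning-first system pushes each translation the moment it lands.
function onTranslatedTextImmediate(text) {
    send(text);
}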

With all these frustrations in mind, and with the guidance of the GA instructors in the class, I set off on building my own solution to these problems with the realtime captioning use case specifically in mind. I wanted a platform that would display text instantaneously with virtually no lag; was OS and device agnostic; was touch-interface compatible; was extremely stable, with a rock-solid connection that could handle disconnects and drops on both the client and provider side without dying a miserable death; and, last but not least, delivered everything on a clean, intuitive interface to boot.

How I Went About It

When I expressed my frustrations during class, one of the instructors told me the problem was fairly straightforward to solve using the technologies the class had been covering all semester. After that discussion, I was determined to create my own realtime text delivery solution, hoping some of what I had been transcribing every day for two months had actually stuck.

Well, it turned out not much did. I spent almost a full week learning the basics the rest of the class had covered weeks before, re-watching videos I had captioned and putting together little bits of code before trying to pull them together into something larger. I originally tried to use a library called CodeMirror, which is essentially a web-based collaborative editor, using Socket.io as its realtime interface. It worked at first, but I found it incapable of handling the volume and speed of text produced by a stenographer: it crashed constantly whenever I transcribed faster than 200 WPM. I later discovered another potential problem: Socket.io doesn’t guarantee order of delivery. In other words, if certain data were received out of order, the discrepancy between what was sent to the server and what a client received would cause the program to freak out. There was no logic to prioritize and handle concurrent changes.

I showed Mirabai my initial working prototype and she was instantly all for it. After much fuss around naming, Mirabai and I settled on Aloft as it’s easy to spell, easy to pronounce, and maintains the characteristic avian theme of Open Steno Project.

I decided to build Aloft for the web so that any device with a modern browser could run it without complications. The core server is written in Node. I used Express and EJS to handle routing and layouts, and jQuery for the dynamic front-end bits.

Screenshot of server code

I incorporated the ShareJS library to handle realtime communication, using browserchannel as its WebSockets-like interface. Additionally, I wrapped ShareJS with Primus for more robust handling of disconnects and dissemination of updated content if/when a dropped machine comes back online. Transcript data is stored in a Mongo database via a wrapper, livedb-mongo, which allows ShareJS to easily store the documents as JSON objects into Mongo collections. On the front end, I used Bootstrap as the primary framework with the Flat UI theme. Aloft is currently deployed on DigitalOcean.
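For the curious, the rough shape of that stack is simple even if the details aren’t. Here’s a minimal sketch, not Aloft’s actual wiring, of attaching Primus with the browserchannel transformer to an Express server and fanning incoming transcript data out to every connected viewer (the real app layers ShareJS’s operational transforms on top of this):

// A minimal sketch, not Aloft's actual wiring.
var express = require('express');
var http = require('http');
var Primus = require('primus');

var app = express();
var server = http.createServer(app);
var primus = new Primus(server, { transformer: 'browserchannel' });

primus.on('connection', function (spark) {
    // Whatever the captioner's client sends, broadcast to every open connection.
    spark.on('data', function (data) {
        primus.forEach(function (s) {
            s.write(data);
        });
    });
});

server.listen(4000);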

Current Features

  • Fast AF text delivery that is as close to realtime as possible given your connection speed.
  • OS agnostic! Runs on Mac, Windows, Android, iOS, Linux, anything you can run a modern Internet browser on.
  • User login which allows captioner to create new events as well as visualize all past events by author and event title.
  • Captioner can delete, modify, reopen previous sessions, and view raw text with a click of a link or button.
  • Option to make a session collaborative or not if you want to let your viewers directly edit your transcription.
  • Ability for the viewer to easily change font face, font size, invert colors, increase line spacing, save the transcription as .txt, and hide menu options.
  • Easy toggle button to let viewers turn on/off autoscrolling so they can review past text but be quickly able to snap down to what is currently being written.
  • Ability to run Aloft both over the Internet or as a local instance during cases in which a reliable Internet connection is not available (daemonize using pm2 and access via your machine’s IP addy).
Aloft homepage
Aloft homepage appearance if you are a captioner

In the Works

  • Plugin that allows captioners using commercial steno software that typically output text to local TCP ports to send text to Aloft without having to focus their cursor on the editing window in the browser. Right now, Aloft is mostly ideal for stenographers using Plover.
  • Ability for users on commercial steno software to make changes in the transcript, with changes reflected instantly on Aloft.
  • Ability to execute Vim-like commands in the Aloft editor window.
  • Angularize front-end elements that are currently accomplished via somewhat clunky scripts.
  • “Minimal Mode,” which would allow the captioner to send links to a completely stripped-down, nothing-but-the-text page that can be modified via parameters passed in the URL (e.g. aloft.nu/stanley/columbia?min&fg=white&bg=black&size=20 would render a page containing text from job name columbia with 20px white text on a black background; see the sketch below).
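Here’s a rough sketch of how minimal mode’s parameters might be parsed client-side; the feature isn’t built yet, so this is purely hypothetical:

// Hypothetical client-side handling for the planned minimal mode.
// Defaults apply when a parameter is absent from the URL.
var params = new URLSearchParams(window.location.search);

if (params.has('min')) {
    document.body.style.color = params.get('fg') || 'black';
    document.body.style.backgroundColor = params.get('bg') || 'white';
    document.body.style.fontSize = (parseInt(params.get('size'), 10) || 16) + 'px';
}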

That’s all I have so far but for the short amount of time Aloft has been in existence, I’ve been extremely satisfied with it. I haven’t used my commercial steno software at all, in favor of using Aloft with Plover exclusively for about two weeks now. Mirabai has also begun to use it in her work. I’m confident that once I get the add-on working to get users of commercial stenography software on board, it’ll really take off.

Using Aloft on the Job
Using Aloft on the Job on my Macbook Pro and iPad

TL;DR

I was captioning a web development course when I realized how unsatisfied I was with every commercial realtime system currently available. I consulted the instructors and used what I had learned to build a custom solution designed to overcome all the things I hate about what’s already out there and called it Aloft because it maintains the avian theme of the Open Steno Project.

Try it out for yourself: Aloft.nu: Simple Realtime Text Delivery

Special thanks to: Matt Huntington, Matthew Short, Kristyn Bryan, Greg Dunn, Mirabai Knight, Ted Morin

Stan’s Quick and Dirty Video: How Steno Works

Hey! Sorry for the lack of updates. I wanted to share a new video I just made about how steno works, for those who still don’t know. I wanted something very quick that summed up all the main points, short and to the point, so one of my students could use it for a class project about a day in the life of a deaf person. Enjoy!

A Different Kind of Steno Hell: German

If you’ve missed the blatantly self-promotional vomit I’ve been unabashedly spewing all over Jade’s Stenoquery and my own social media pages, you might not know that I’ve accepted quite the unique challenge this semester: captioning a semester-long intensive German class. I feel like I can barely manage to pull my head out of my ass when writing English realtime, but GERMAN?! HOW DO YOU EVEN? Besides my not speaking the language at all prior to starting the class, stenoing a language class (as opposed to a monolingual foreign-language environment) presents a number of unique problems that compound the scads of details we stenos are constantly juggling in our heads whilst taking down just one language.

Let me explain why this situation is, without exaggeration, a veritable shitshow.

FML Subway
The subway lines I take to work.

1. Language classes are bilingual environments.

You’re dealing with two sets of vocabularies and trying to prevent their corresponding dictionaries from overlapping, because you constantly need access to the entirety of each at all times. You might be tempted to assign EUBG for “ich” but that’s your -ic. Sie and sie? Well, SAOE is “see” and SAO*E is “sea.” ES can’t be “es” because ES is “he is.” TKU can’t be “du” because it’s “did you.” The list goes on and on. How the hell do you brief anything? If you’re a brief-heavy writer like I am, pretty much all simple strokes are used up, including those containing the asterisk. What is one to do? I guess I don’t have that many strokes containing the number bar.

Solution: NUMBER BAR IT IS. Thus, ich = #EU, ich bin = #EUB, du = #2KU, sie = #150E, Sie = #150*E, etc. Thankfully if it’s multistroke, the German entry will seldom conflict with an English entry so for those you can just write them normally. TKPWE/SEUBGT for Gesicht won’t really conflict with anything in English. Nor will PHAOURT/HREURBG/SAOEUTS for mütterlicherseits.

Sometimes, I just define the English outline for certain words that look enough like English and mean the same thing in German, which, consequently means that while I’m writing, random German words sometimes pop up. But since they look enough like their English counterparts, I don’t worry about it being confusing. It just looks kinda ugly having English text with random capitalized nouns and German here and there: “I have a nice Präsentation for you today. But before that, kann you all turn in your completed Handouts?” But you can only do so much, ja?

2. Special characters.

It’s logical to expect heavy reliance on fingerspelling in a foreign language class but what if the word contains a vowel with an umlaut or the eszett (ß)? These letters make important morphological distinctions in German. “Bruder” is just one brother but “Brüder” is plural. Es gibt hier ein Apfel aber ich kann noch drei Äpfel in meiner Tasche haben (There is an apple here but I can have three more apples in my bag). You can’t just stick in the non-umlauted version because many words differentiate themselves solely by the presence of the two dots.

Solution: Make additional fingerspelling letters using some random ass combination of letters. I chose STK* for no reason at all. Thus, STKA* = ä, STKO* = ö, STK*E = ë, STK*U = ü, STK*S = ß, STKO*EU = äu.

3. German phonology doesn’t necessarily map neatly to what we have available on the American steno keyboard.

There are many ways in which German does not “fit” on the English-derived steno keyboard.

  • German has more vowels than English and many words are minimal pairs based on a vowel sound alone. Bruder/Brüder, schon/schön only differ by the raised vowel but some differ by a vowel and maybe something else attached to the end: Zahn/Zähne, Satz/Sätze. English steno theories do not have ways to handle these.

Solution: Reassign English steno combos to German values. Thus, AOU -> ü (Bruder = PWRURD, Brüder = PWRAOURD). AE -> ä (Satz = SATS, Sätze = SAETS). OE -> ö (schon = SHOPB, schön = SHOEPB).

  • German contains certain consonant combos that don’t exist in English: schreiben, schlappen schneit, schwarz.

Solution: Again, make some shit up. SKHR- = schr-/schl-, SKPH- = schn-, SW- = schw-. Thus, SKHRAOEUB = schreib-, SKHRAP = schlap-, SKHRAF = schlaf, SKPHAOEUT = schneit, SWARZ = schwarz.

*I haven’t yet found an instance where having both schr- and schl- represented as SKHR- would create conflict.

  • German morphemes don’t look like English ones, which makes tucking endings and such impossible using conventional English steno theory (The most common endings are not -ing, -ed, and -s/es so you can’t just throw those in intuitively for inflected forms). This fact makes writing German on an English steno machine incredibly sucky because PRACTICALLY EVERYFUCKINGTHING in German has mutable endings or vowels carrying fake IDs.
A
Um, but your passport says “Ä.” SECURRRITY SUHCURITEH!

So how do you avoid having to come back for every damn inflection of a word?

Solution: Assign new values to final consonants that would serve as tucked endings in English. -DZ = -en, -Z = -e, -SZ = -es, -T = -t, *S = st. Why -Z for -e? Jacked that shit straight from German steno. Thus, HABDZ = hab + en -> haben. SAGDZ = sag + en -> sagen. TRAGZ = trage. TKPWHRAUBZ = glaube. TKPWHRAUBDZ = glauben. If -Z doesn’t “fit” when adding -e, throw in * instead. So dachte becomes TKA*BGT, brachte becomes PWRA*BGT because you can’t fit BGT and -Z in the same stroke, obvs.

But you can have fun with this like in English. Make up a bunch of arbitrary shit that adds endings or phrases if it’s included in a stroke. Like, I’ve made the -Z into an x + Sie phrase ender. So haben Sie becomes HAEBZ, stehen Sie becomes STA*EUPBZ, können Sie becomes KO*EPBZ. -FB and -FT add the accompanying conjugations of haben in certain phrases. Ich habe -> EUFB, wir haben -> WEUFB, sie hat, er hat -> SWRAOEFT, EFT.

  • Condensing syllables.

The reason there’s an asterisk in stehen Sie and können Sie in the above examples of phrasing is because it means I’ve “skipped” a syllable. So, like, kön-nen is KÖN and ste-hen (which is pronounced more like “shtay-hun”) is STAIN. I don’t really have a nice, concise system yet but for now I use * sometimes to condense two syllables, one of which contains a stressed syllable followed by an unstressed schwa-sounding one.

Stehen -> STA*EUPB, können -> KO*EPB, sehen -> SA*EUPB, gehen -> TKPWA*EUPB.

4. Capitalization.

German capitalizes all nouns. Ein Gewitter is “a thunderstorm” while es gewittert means “it is thunderstorming.” Sometimes a noun and a verb stem look the same. Sie fragen means “you ask” but haben Sie Fragen? means “do you have questions?” So every time I write TPRAG, which I’ve defined as “frag-,” I have to take a split second to ask myself whether in this context, it is a noun or a verb to decide if I should hit KPA (cap next) right before.

Say a normally capitalized noun is combined into some compound — yeah, it’s no longer capitalized if it’s not the first word in said compound. “The restaurant” should be written das Restaurant but if you want to say “my favorite restaurant is,” it takes on the prefix Lieblings-. Hence, mein Lieblingsrestaurant ist would be the German equivalent.

This becomes a problem if, say, you’ve made a handy prefix HRAOEBLS for Lieblings-. Eclipse doesn’t have a “force next word to lowercase” command, so if you write it having defined RAURPBT as “Restaurant,” you get LieblingsRestaurant — wrong.

Solution: Define everything that’s a compound. Fingerspell like a mothafucka. Define certain things that are commonly said as non-nouns in lowercase; that way you can just throw your cap-next stroke in before them to make it look right. But the trick is remembering to, every goddamn time.

keyboard smash
LIEBLINGSLESIJGELLGFDMSDFGMKSDF

5. Not knowing the language.

Most anglophone stenos will tell you (usually with a characteristic balls-out, no-shame, pedantic/elitist/prescriptivist flair — note to U.S. court reporters: Please stop) how crucial having a vast vocabulary is when taking down English speech. Well, lucky you, because your English is pretty on point, yeah? Too bad it means koala shit in this context. Of course there will be certain frequently used words and phrases that overlap with English (think: Schadenfreude, Zeitgeist, gesundheit, etc.), but for the most part, German phonology and syntax differ enough that having an exquisitely developed ear for English will help nichts bis sehr wenig (nothing to very little) when writing German steno, or any other language steno for that matter.

I have a fairly extensive linguistic background. I graduated from UW with a degree in Linguistics, speak three languages fluently, and have studied four others to varying degrees, but German is not one of them. I speak intermediate-level Dutch, which, I’ve come to learn, resembles German a great deal more than English does. It may sound like I’m bragging, but all of this, surprisingly, makes little difference. It’s still hard af. For all intents and purposes, I’m as much a beginner as most of the other students in class. The class is mostly taught in German. While the other students can sit there clueless when the teacher says something unfamiliar, I have to get something down. While the others can use class time to practice new words and phrases, I have to have them mastered by the very next time the teacher utters them. I have to constantly draw upon my linguistic knowledge and combine it with what I can infer from my cute repertoire of German to approximate spellings for the relentless barrage of words I’ve never heard before. I feel constantly under pressure to stay 10 steps ahead of everyone else through intense self-study and/or keeping my phone unlocked with Google Translate up at all times.

Just spellchecking.
Oh, just spellchecking you.

One of the ways being a learner has become most painfully apparent is when attempting to accurately transcribe German declensions. If you don’t know what declension is (also known as “cases”), it’s basically the way certain languages handle grammatical roles in sentences. Analytic languages like English or Chinese rely heavily on word order to convey who does what to whom in a sentence, while leaving most of the superficial aspects of the words alone (their sound and spelling). Synthetic languages like German tend to do this by forcing you to solve a timed Rubik’s cube every time you hit a noun. Run out of time? Congratulations, you sound like an idiot. English has remnants of a more case-heavy past in the form of subject vs. object pronoun pairs: he/him, she/her, I/me. Well, German has three genders, two numbers, and four cases. So at minimum, an article will generally have 16 forms. Some look the same superficially but serve different functions. The masculine nominative der (the) is the same as the genitive plural — der! It makes word order relatively free compared to English, but the drawback is that trying to compose a simple phrase in German is a bit like attempting to haul all 20 of your rabid feral cats in a red wagon across a busy intersection.

kitties
Oh, they’ll totally stay like that.

Sie trägt den Pullover

  • SWRAOE/TRAEGT/TKEPB/PHROUFR
  • She is wearing the sweater.

Der Pullover ist grün

  • TKER/PHROUFR/KWR*EUS/TKPWRAOUPB
  • The sweater is green.

Ich habe einen Hund in meinem Haus

  • EUFB/OEUPBD/HUPBD/TPH/PHOEUPL/HAUS **maybe make TPH*PLD in meinem??
  • I have a dog in my house.

Mein Hund ist im (in + den = im) Haus

  • PHOEUPB/HUPBD/KWR*EUS/KWREUPL/HAUS
  • My dog is in the house.

Das ist ein Haus

  • TKAS/OEUPB/HAUS
  • That is a house.

Wofür interessieren Sie sich?

  • WAOUFR/SPW*/SAOERPB/SWRAO*E/SEUFP
  • What are you interested in?

Woher kommst du?

  • WOER/KO*PLTS/#2KU
  • Where are you from?

Ich komme aus Seattle.

  • #EU/KPHA*US/SAOELGTS **Notice I phrased komme aus as KPHA*US. The * is supposed to stand for the -e in komme.
  • I come from Seattle.

The bolded portions of the above examples are to point out the various forms words like “the,” “a/an,” “my,” and “you” take in German. The last example is to illustrate how I applied both phrasing and inflectional principles into a single stroke.

On top of that, Germanic languages tend to stress the first syllable so a lot of times, unless a phrase is uttered very carefully, it’s hard to tell the difference between eine, einer, einen, einem, deine, deiner, deinem, kein, keine, keiner, keinen, keinem because the first syllable gets the stress and the rest goes straight to schwablivion.

Q. So what does all this amount to?

Don't Even Know.
Don’t Even Know.
German Steno
German Steno (Click to embiggen)

A. Some ridiculous fucking crazy shit.

The end. Tschüss (STAOUS)!