And now for a word from My Mouthpiece...

Hey kids, Jeff here with something new for sufferers of expressive aphasia, the loss of the ability to say (or write) what you mean to say. As my aphasia has gotten worse, I have had to give some serious thought to what I would do if I were alone (without my caregiver) at the emergency room and needed to convey important information to the ER nurse...or if I were out in public and for some reason was stopped by the police...or, for the more common problem, on the mornings when I feel the worst and am least able to express what is wrong to my caregiver, so I just give up and live with the misery.

Most of the solutions I have seen so far either rely too much on language (which is why even sign-language users can suffer from aphasia, just like anyone else) or are too limited in scope. For example, I can have the best talker app in the world, but it will be worthless if it cannot list my current medications for a doctor or pass on the phone number of my caregiver during one of my occasional brushes with the law.

I am an ex-engineer, so that's how I tend to think, but I have never had to tackle a problem like this before. To help myself, I needed to do some experimentation to figure out what will work, what will actually help, and what won't. To further this testing, I wrote a quickie (and fairly lame) Python app called My Mouthpiece, a reference to how lawyers were referred to in the gangster era. Right now it works on the principle of what I call "topic" files: nothing more than text files whose lines are either full sentences or sentence fragments that can be stitched together to say something more sophisticated than "My name is Jeff".

To use it, say I wanted it to say the following to someone, perhaps a medical professional:

"My name is Jeff Cobb, I am a 57 year old with Lewy Body Dementia. I have problems with expressive aphasia, cognitive defects and loss of motor skills. My current medications include Norco 10 mg 3 times daily."

Those are three lines from my medical.txt file, which looks like this:

jeff@jeff-Oryx-Pro:~/bin/mmplib$ cat medical.txt
My name is Jeff Cobb, I am a 57 year old with Lewy Body Dementia.
I have problems with expressive aphasia, cognitive defects and loss of
motor skills.
My current medications include Norco 10 mg 3 times daily.
Please speak slowly.
If you need more information, please contact my caregiver Beth Cobb.

So to say what I needed to, on my laptop I can open a window and type:

jeff@jeff-Oryx-Pro:~/dev/mymouthpiece$ ./mmp -s medical.txt=abc

At which point my app opens medical.txt, grabs the first three lines (abc; abcf would have grabbed the first three lines and the sixth), concatenates them together, then uses Google TTS tech to convert the text into human-sounding speech, which is saved as an MP3 file. The default sound player then plays it through the speaker.
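For the curious, here is a rough sketch of how a letters-to-lines picker like this could work in Python. The function names are hypothetical (this is not the actual My Mouthpiece source), but the gTTS save-to-MP3 call is the library's real API:

```python
import string

def pick_lines(path, letters):
    """Select lines from a topic file by letter: a = line 1, b = line 2, etc."""
    with open(path, encoding="utf-8") as fh:
        lines = [line.strip() for line in fh if line.strip()]
    # Each letter indexes into the file's lines; join the picks into one utterance.
    return " ".join(lines[string.ascii_lowercase.index(ch)] for ch in letters)

def speak(text, out="speech.mp3"):
    """Convert text to an MP3 with gTTS (imported lazily so the line
    picking above still works on a machine without gTTS installed)."""
    from gtts import gTTS
    gTTS(text).save(out)

# Usage (assuming medical.txt exists in the current directory):
#   speak(pick_lines("medical.txt", "abc"))
#   ...then play speech.mp3 with the default sound player.
```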

I am developing a library of simple topic files, from general stuff (where I live, who I am, how to contact my caregiver) to shopping to status (I feel hungry, I feel thirsty, I am fine, I am in pain, etc.) and anything else I can think of.

Right now this is just a proof of concept, to see what works and what doesn't. The goal is to develop a system centered around sentence components rather than full speeches, where the components can be combined in ways not envisioned by the programmer to produce unique and meaningful language. Also, by focusing the input on a more symbolic system, perhaps using gestures to convey intended meaning, the pitfalls of standard language can be avoided.

I am also working with a UK charity on an app that lives on a cell phone or tablet and does a much simpler subset of this (point at a picture, it speaks a line); the outcome of these experiments will actually help that effort....

I will post more as it happens....still, even this bit is kind of exciting for me and frankly better than anything else I have tried so far. I do want this, or something like it, to live on the cell phone, so the speech might be piped through a voice call, or the text part could simply go through the texting mechanism....

The problem is that the programmer part of my brain is broken AF, making even this piddly little thing feel like brain surgery....and then there is the real question of what I can actually do with this project. Worst comes to worst, it might give someone else some ideas after I am gone....


UPDATE 28 June: I have added a simple hack that has been on my wish-list from day one: the ability to stack or compile a response from a variety of sources. For example, my "mouthpiece" library currently contains the following topic files:

caregiver.txt = info for new caregivers
general.txt = general information about me (contact info, etc.)
lbd.txt = short description of Lewy Body Dementia
medical.txt = information such as a medical summary and medication list
mmp.txt = an explanation of what My Mouthpiece does (oddly, when I got the first version of this done I could not tell anyone about it because I was in a bad fog and could barely speak, so I hacked this file out to explain it)
status.txt = short status descriptions (I am tired, I am fine, I am hungry, etc.)

Until now, however, I could only "speak" from one topic at a time. As of a few minutes ago I finished testing a version where, with a single command, I can request line 1 from the general topic, lines 2-4 from the medical topic, and line 1 from the status topic by simply doing:

mmp -s general.txt=a -s medical.txt=bcd -s status.txt=a
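A sketch of how those repeated -s options might be parsed (a hypothetical helper, not the real mmp source; it just uses Python's standard argparse with action="append"):

```python
import argparse

def parse_selectors(argv):
    """Turn repeated -s FILE=LETTERS options into (file, letters) pairs,
    preserved in the order they were given on the command line."""
    parser = argparse.ArgumentParser(prog="mmp")
    parser.add_argument("-s", "--say", action="append", metavar="FILE=LETTERS",
                        help="topic file and the lines to speak (a=line 1, b=line 2, ...)")
    args = parser.parse_args(argv)
    # args.say is None when no -s was given; split each entry at the first '='.
    return [tuple(item.split("=", 1)) for item in (args.say or [])]
```

Each (file, letters) pair would then go through the line picker, with the results concatenated before a single TTS call; for example, `parse_selectors(["-s", "general.txt=a", "-s", "medical.txt=bcd", "-s", "status.txt=a"])` yields `[("general.txt", "a"), ("medical.txt", "bcd"), ("status.txt", "a")]`.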

I have also contacted the author of the main tool I use to do the speech conversion (gTTS, for Google Text-to-Speech, the same engine as on your phone) to ask how to change the voice....