Vocola macro for multiple file search and replace in VIM editor using Dragon NaturallySpeaking

I recently returned to vim as my primary IDE because, for various reasons, Visual Studio Code wasn't playing nicely in cross-platform environments.
I'm currently working in Go & C (including developing bindings with cgo).
I've had good results with the NERDTree, CtrlP, ctags, cscope and omni-complete plugins for code navigation (with the relevant Vocola macros for the key-binding incantations).

But by far the most surprising experience was search and replace in multiple files without a plug-in:

# utility macros
EscWait() := {esc} Wait(200); # IDEs sometimes need a delay between keypresses
CmdWait() := EscWait() ":" Wait(200); # IDEs sometimes need a delay between keypresses
## search and replace in multiple files without plug-in
search with = CmdWait() "let @z=''" {left_1};
LoadReg() := Wait(200) {ctrl+r} Wait(200) "z" Wait(200);
replace in files = CmdWait() "args `grep -r -l '' .`" {left_4} LoadReg() {enter} CmdWait() "argdo %s///gc | update" {left_13} LoadReg() {right_1};
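For anyone curious what those keystrokes drive underneath, this is roughly the workflow the macros expand to. A minimal sketch, run against a throwaway directory; the pattern 'foo' and replacement 'bar' are placeholders standing in for whatever is dictated into register z:

```shell
# grep -r -l prints only the NAMES of files containing the pattern,
# which is exactly what vim's :args needs to build an argument list.
tmp=$(mktemp -d)
printf 'foo bar\n' > "$tmp/a.txt"
printf 'baz\n'     > "$tmp/b.txt"
grep -r -l 'foo' "$tmp"    # prints only a.txt's path
# Inside vim, the macros then issue (with register z holding the pattern):
#   :args `grep -r -l 'foo' .`
#   :argdo %s/foo/bar/gc | update
# :argdo runs the confirmed substitution in every file in the argument
# list; | update writes each file only if it actually changed.
rm -rf "$tmp"
```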

reference: this post

I’m finding this extremely useful and wanted to share.

Thanks again Mark and others for Natlink / Vocola.

accessing digital media around the house

Access to digital media

In recent years our standard of living has become closely linked to the availability, quantity and quality of digital media. My recent mini-project has been concerned with enabling me to access digital media around the house independently and irrespective of location. The objectives were as follows:
I want to be able to listen to or watch media from a number of sources, including YouTube, iPlayer and iTunes, as well as other websites and my local media.
I want to be able to access this from my bed, my wheelchair and anywhere else I happen to be in the house.
I want to also be able to access this media through a number of devices: laptop/voice recognition, wheelchair controls/mobile phone, bedside switch/environmental control.
I want to be able to use my mobile phone through my laptop and my wheelchair controls.

I have mostly achieved the objectives through the following solution:

I have a wheelable TV stand supporting my 38″ TV, two laptop docking stations, a Raspberry Pi minicomputer, a Chromecast HDMI dongle and a hi-fi system connected to the TV via a Bluetooth receiver and transmitter.

All the connections are integrated within the stand; the laptops, Chromecast and Raspberry Pi communicate with the Internet and the home network over Wi-Fi, and the TV connects with the hi-fi and speakers over Bluetooth. There is only one cable connected to the TV stand: a single 240-volt power cable.

The premise is that everything transmits media to the TV which in turn plays audio via Bluetooth over the hi-fi.

Any device capable of supporting a Chrome browser can stream digital media directly to the TV via the Chromecast: from a laptop when I'm using voice recognition, or from my phone when in my wheelchair, using my head control/wheelchair-mounted environmental control to scan through iPhone icons via the Perrero module. This also works well with the YouTube iPhone application.

I can control my media centre, displayed through the TV and hosted on my Raspberry Pi (a £35 minicomputer), from my wheelchair environmental control or the single-switch environmental control next to my bed. Through this I can access a host of digital media from the Internet or my local media server. It also provides an AirPlay interface, which allows any device supporting iTunes (i.e. any Apple device) to stream digital media directly to the TV, another option from my phone or laptop.

The TV stand can be wheeled around the house still transmitting audio through the hi-fi via Bluetooth, connected through Wi-Fi to the Internet and the home network.

This allows me to spend less time asking people to connect/reconnect devices and more time getting on with life…

media streaming questions


some questions about media streaming / accessible technology that I asked a well-known technical blogger…

I’ve been looking at home media streaming solutions for a while, including things like XBMC on the Raspberry Pi, Apple AirPlay, Squeezebox and Plex server. I’m basically looking for a solution where the TV, content clients, content server and speakers communicate wirelessly. This is obviously possible; I’m just trying to find the most cost-effective, coherent solution supporting the largest number of content client applications, e.g. iTunes / Chrome browser / operating-system audio subsystem / XBMC / Plex client.

My question to you is specifically about the availability of the Chromecast in the UK. It seems to suit my needs better than something like Apple TV, which, unless jailbroken, I believe will not stream media hosted on the local area network unless it is through an iTunes library (I’m not sure whether you could just run iTunes on the Apple TV pulling media from a shared network location). Although the Chromecast may not be able to stream local media natively, I’m fairly sure I can find a way to use XBMC/Leapcast to stream LAN-hosted media to it.

My other question is about your opinion of Run-and-Read. Due to my inability to physically turn pages or push buttons, I’m constantly frustrated by the lack of contactless page-turning devices for ereaders. This device (which I have backed on Dragon) looks good, and I could attach it to a headband to tap with my head on something to turn pages.

Have been playing with various devices for eyegaze control, voice recognition as always and iphone switch and voice control.


In-car command-centre

So, my solution which almost allows full use of the iPhone without any physical contact is as follows:
a switch input on a Manfrotto clamp attached to my wheelchair (which allows me to press the switch with my head),
connected to a Komodolabs iPhone interface box (which is paired with the iPhone over Bluetooth).
This allows scanning through the visible fields on the iPhone with a press of the switch with my head, and can be used to perform swiping actions to unlock the phone, answer calls, dial, etc.
The switch mechanism does not allow use of the keyboard when searching for contacts (a bug in the software), and typing text is obviously slow with a single-switch scan.
HappyFingers (a PC-to-iPhone communication app) allows me to type text messages, search for contacts and make phone calls on my iPhone via my PC (with voice recognition) efficiently.
The restriction with these actions is that they need to be confirmed on the phone, which I can now do with the switch input.

There are still a couple of problems to overcome. HappyFingers relies on web access for both the PC and the iPhone concerned (to relay push notifications to the iPhone, among other reasons), which is frustrating in the many 3G blackspots around where I live.
The second problem is that HappyFingers needs to be paired with the phone before VoiceOver is enabled (although this is not so much of a problem if you are aware of the solution).

October 2012 update

Busy trying to get environmental facilities improved without the help of the Disabled Facilities Grant (I am no longer eligible because I am working and married). Installing remote-control (infrared, not radio-frequency) dimmer switches and door openers with the help of a friend called Gordon and another called Jack. We will hopefully significantly undercut the Possum quote of nearly £2K per door opener! Light switches should also work out significantly cheaper than asking a specialist company to do the job. The risks, as with most ventures in this domain, are associated with servicing costs, which I’m going to have to stomach.
Still working on a suitable solution for phone control from the wheelchair, preferably without using a laptop/computer. If using the phone through a computer, the advantage is the familiar voice-control input; the disadvantages are mainly the lack of portability and the dependence on the computer. The iPhone may be the way to go, as accessible technology for Android seems unfeasible: platform variability threatens the return on investment in a mobile application through maintenance costs and reliability issues. Windows Mobile has the advantage of being most compatible with Windows operating systems but, given the success of Android and iOS phones, may be a sinking ship. The iPhone/iPad interface device from RSL Steeper seems the most promising prospect… let’s hope it’s not too long in coming.
The PCEye eye-tracking mouse-control device is definitely next on the wish list; I estimate it will improve my IT productivity by about 30%, at a cost of around £2.5K (a welcome reduction from three years ago).
Lots of travelling to and from London for social events; on the way back from Glastonbury today after Louise Stewart’s birthday. What a nice place.
Looking forward to talking at MASCIP (Google it if interested).
Off-road wheelchair: waiting for confirmation of, and programming parameters for, the settings supposedly hardcoded at factory configuration. This will help with steering corrections.
Brushing up on web development frameworks (Django) to tie into my Python-related skill set.
A hobby project to help with my filing system and a remotely accessible personal document repository.
Looking into message-brokering systems and related efficiency.

Working with voice recognition: Alternative PDF Reader

Using Mu PDF

MuPDF is a more keyboard-friendly PDF reader than Adobe Acrobat Reader.
The Adobe product is more Flash/AIR based and therefore doesn’t respond as readily to keyboard control; it is geared towards point-and-click usage, so the voice-recognition “say what you see” paradigm doesn’t work quite so well.
MuPDF is a very simple program that doesn’t seem to crash or produce unexpected results: exactly what you need when trying to control it with voice recognition.
The “<” and “>” keys page backwards/forwards 10 pages at a time, very useful for quickly browsing through documents. You can use “m”/“t” as in vim to set a mark and trace back to the previous mark, and can also use a list of 10 marks, e.g. m[0-9]/t[0-9].

One less frustrating program to deal with.

Hands-free Software Development, Pt I

I thought I would write a little bit about the way in which I develop software. I will try to explain the combinations of tools and the rationale behind the decisions. I will also note my opinions based on the experiences I have had. If anyone has anything specific they would like explained/elaborated upon please post a comment or e-mail whizz2000 at Hotmail dot com.

I rely completely on hands-free solutions; my main tool has been Dragon NaturallySpeaking. I use the most up-to-date version I can, because any performance improvement results in a significant time saving. I do not use the Dragon Professional version; to create scripts like you can in Professional, I use NatLink (by Joel Gould, I think). NatLink is a Python-based scripting extension that can be used to interact with Dragon.

I use Dragon as an input method in a variety of ways. The highest level (most powerful, least flexible) is scripted commands, for example “search Google for voice recognition”, or a NatLink macro command such as “insert address one”. Obviously dictation is a very powerful feature of Dragon, and the auto-punctuation (which I usually turn off) can be useful for some people; I find dictation very accurate with Dragon 11.5 (as of writing).
The next level of input is the “spell …” syntax, which allows you to spell out letters in quick succession. This is very useful when spelling words not in your Dragon vocabulary. Unfortunately, with newer versions of Windows and browser-based text input, this is not always possible. For example, when using the Start-bar desktop search in Windows 7, issuing the spell command moves focus out of the desktop search text field. A workaround is to “switch to spell mode” and then spell out the word. In Firefox, a custom chrome file lets you remove the orange button in the top left, which otherwise always opens when you issue a spell command. Even then, the spell command can also take you back to the first tab; I’m sure there is another workaround using a chrome file, but I haven’t had time to find it yet. You can again use spell mode, or keep only a single tab per Firefox window. I always use phonetics when producing characters, as this reduces ambiguity and increases accuracy and speed of recognition. It takes a while to learn them, but I have found it worthwhile in the long run.
The next level of input is the “press …” command, with which you can issue single or combination key presses (and mouse-keyboard combinations as of Dragon 11). This can be useful when the spell command will not work for some reason. I do a lot of work across different terminal types, and when using virtual machines or Remote Desktop/VNC, for example, the Dragon built-ins sometimes don’t work.
The mouse-grid input can be learnt from the Dragon documentation and is fairly intuitive. Learning to use it quickly is a great help, but it requires you to visualise coordinates in advance of uttering them. The Mac version does not allow you to issue multiple coordinates at once and is therefore unusable for me.

Next time: AutoHotkey, PuTTY and NatLink

Touchscreen with mouth stick or voice recognition, that is the question

Having seen the quality of the iPad 2 screen (very responsive), and the availability of the Dragon dictation application, I’m now looking to see if I can acquire one and a stand to use a mouth stick for various tasks. Time will tell whether this is as efficient as voice recognition. It’s likely that it will be more efficient for various tasks but not universally. Now to try to find the cash for one…

Anonymous FTP setup on Ubuntu

Debian anonymous FTP tips:
use a vsftpd.conf similar to that at the following site:
the anon_root parameter is important; it must point to a NON-WRITABLE root for the anonymous user


It seems that there are various possibilities for anonymous login credentials, including client-specific settings (e.g. an anonymous checkbox; I don’t know what this actually sends to the server), and the user “anonymous” or “”. Passwords can include “”, any arbitrary string (e.g. an e-mail address) or “guest”.

A specific criterion for anonymous users is that the anon_root directory (the root for the anonymous user) is non-writable, i.e. owned by a different user to that of the anonymous FTP login (the ftp user for vsftpd). This does mean that anonymous users cannot create directories or write files in the root anonymous directory. They can, however, write to a subdirectory of this root directory (/var/ftp/pub in my case).

In the case where my anon_root=/var/ftp and there is a subdirectory “pub” within this directory, the following configuration applied the necessary permissions:

(As root)
chown -R ftp /var/ftp
chgrp -R ftp /var/ftp
chmod -R 755 /var/ftp
chown root /var/ftp

This should remove the “vsftpd: refusing to run with writable anonymous root” error.
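The effect of that recipe can be sketched on a scratch directory. A minimal sketch using mktemp in place of /var/ftp (chown to the ftp user needs root, so only the modes are demonstrated); the key point is that mode 755 leaves the root writable by its owner alone, so a non-owner ftp user cannot write to it:

```shell
# Demonstrate the permission layout vsftpd expects for the anonymous
# root: mode 755 on the root means only the owner can write there,
# while the pub subdirectory can be opened up separately if needed.
root=$(mktemp -d)       # stands in for /var/ftp
mkdir "$root/pub"
chmod -R 755 "$root"
stat -c '%a' "$root"    # prints 755
rm -rf "$root"
```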

Example steps:

sudo apt-get install vsftpd
this will also create the ftp user and group. On my Ubuntu setup it didn’t create a home directory.

The following steps set up an alternative home directory for the ftp user, enable autostart, back up the config, edit it and restart the daemon.
usermod -d /var/ftp ftp
sysv-rc-conf --list
sysv-rc-conf vsftpd on
cp /etc/vsftpd.conf /etc/vsftpd.conf.bak
vim /etc/vsftpd.conf
service vsftpd restart
Note: if using a graphical FTP client, don’t forget to refresh; it gets me every time.

After deciding I didn’t want anonymous uploading, I changed the chown_username parameter in the config to ftpsecure, another user with a password. I enabled chroot_local_user, disabled chroot_list_enable and blocked other users from FTP by adding them to /etc/ftpusers. I then added some symbolic links to the home directory of ftpsecure. I also disabled anonymous_enable etc., until deemed necessary later. This seemed to offer the balance of security that I needed.
Remember to keep the group and owner of all the contents of ftpsecure’s home directory set to ftpsecure itself, otherwise you may get problems trying to edit files etc. over FTP.
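For reference, a sketch of the vsftpd.conf fragment the paragraphs above describe. The option names are standard vsftpd ones, but the exact values here are assumptions to adapt, not a tested configuration:

```
# /etc/vsftpd.conf fragment (sketch, not a complete config)
anonymous_enable=NO        # anonymous access off until needed
local_enable=YES           # allow the ftpsecure local account
write_enable=YES
chroot_local_user=YES      # jail local users to their home directory
chroot_list_enable=NO
chown_username=ftpsecure   # from the post; only takes effect for anonymous uploads
# accounts denied FTP access are listed in /etc/ftpusers
```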