Tags: academic, philosophical
I have been invited to write a book chapter on lexical choice for translators (contact me if you want to see a preprint). To get acquainted with this audience, different from my usual computer science crowd, I read a few papers on professional translators' use of technology. Two of them are quite interesting, and I recommend them not only because they make for a good read but also because they have implications outside translation: Translation Skill-sets in a Machine-translation Age by Anthony Pym (2013) and Is Machine Translation Post-editing Worth the Effort?: A Survey of Research into Post-editing and Effort by Maarit Koponen (2016). My search ended with a short ebook by researchers at the MIT Center for Digital Business titled Race Against the Machine: How the Digital Revolution Is Accelerating Innovation, Driving Productivity, and Irreversibly Transforming Employment and the Economy. Both the book and the papers call for humans, if we want to remain employed, to hybridize our work and to seek out ways to work with the computer in some sort of partnership. That process is already clear in human translation: translators now check previously translated similar sentences or the output of machine translation instead of creating new translations from scratch.
The question is then: what about our trade? What does it mean to work in partnership with the computer rather than for the computer? Like others, I have argued that machine learning (more specifically, supervised learning) is akin to traditional programming (in the old soft computing style). It carries many of the same pros and cons as the redefined labor of human translators.
But that's not all. Other areas of programmer / computer partnership that are less widely deployed (but nonetheless well explored in the research literature) are declarative programming techniques for both program verification and program synthesis using automatic theorem provers. The idea here is that instead of writing test cases, you write test-case generators and property checkers for the output of your programs over those generated test cases. I have experience with the Haskell library QuickCheck2, and it's quite pleasant to use (thanks to http://www.cs.mcgill.ca/~fferre8/ for teaching me how to use that library, gracias che!). There are now similar libraries for other programming languages. How can this be described as a programmer / computer partnership, you might ask? In the end it's just another test framework. The difference is in the type of task the human does (enunciating properties) versus the computer (doing the grunt work of checking those properties). Traditional unit testing puts much more of the grunt work on the programmer's side.
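The division of labor is easier to see with a concrete sketch. The following is a hand-rolled, toy version of the QuickCheck idea in plain Python (it is not the real QuickCheck or any existing library's API; `check_property`, `int_list`, and the property names are all hypothetical): the human writes a generator describing the shape of the inputs and the properties the outputs must satisfy, and the machine grinds through hundreds of random cases hunting for a counterexample.

```python
import random

def check_property(prop, generator, trials=200, seed=0):
    """Run `prop` against `trials` randomly generated inputs.
    Returns None on success, or the first failing input found.
    A toy sketch of the QuickCheck idea, not a real library API."""
    rng = random.Random(seed)
    for _ in range(trials):
        case = generator(rng)
        if not prop(case):
            return case  # counterexample, found by the machine
    return None

# The human's job, part 1: describe the *shape* of valid inputs.
def int_list(rng):
    return [rng.randint(-100, 100) for _ in range(rng.randint(0, 20))]

# The human's job, part 2: enunciate properties of the output.
def prop_reverse_involution(xs):
    return list(reversed(list(reversed(xs)))) == xs

def prop_sort_is_ordered(xs):
    ys = sorted(xs)
    return all(a <= b for a, b in zip(ys, ys[1:]))

# The machine's job: the grunt work of checking them.
assert check_property(prop_reverse_involution, int_list) is None
assert check_property(prop_sort_is_ordered, int_list) is None

# A buggy "sort" that forgets to sort is caught automatically,
# with a concrete counterexample handed back to the programmer:
def prop_buggy_sort(xs):
    ys = list(xs)  # bug: never sorted
    return all(a <= b for a, b in zip(ys, ys[1:]))

assert check_property(prop_buggy_sort, int_list) is not None
```

Compare this with classic unit testing, where the programmer hand-picks each input and expected output: here the human states one general property and the computer manufactures and checks the cases.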
That focus on overall properties rather than the code behind them brings us to the hope of automatic programming using theorem provers. There have been massive improvements in theorem-prover capabilities built on general SAT solvers in recent years. Maybe it's time this new technology started finding its way onto the desktops of professional developers.
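To make the SAT-solver connection concrete, here is a toy sketch of the DPLL procedure that sits at the core of those solvers (a teaching illustration only; production solvers like MiniSat or the engines inside modern provers add clause learning, heuristics, and far more). Clauses are lists of integer literals in the usual CNF convention: a positive number means the variable is true, a negative one means it is false.

```python
def dpll(clauses, assignment=None):
    """Toy DPLL SAT solver over CNF clauses (lists of int literals).
    Returns a satisfying assignment dict {var: bool}, or None if
    the formula is unsatisfiable. A sketch, not a production solver."""
    if assignment is None:
        assignment = {}
    # Simplify every clause under the current partial assignment.
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue  # clause already satisfied
        rest = [l for l in clause if abs(l) not in assignment]
        if not rest:
            return None  # clause falsified: conflict
        simplified.append(rest)
    if not simplified:
        return assignment  # every clause satisfied
    # Unit propagation: a one-literal clause forces that literal.
    for clause in simplified:
        if len(clause) == 1:
            l = clause[0]
            return dpll(clauses, {**assignment, abs(l): l > 0})
    # Otherwise branch on the first unassigned literal, both ways.
    l = simplified[0][0]
    for value in (l > 0, not (l > 0)):
        result = dpll(clauses, {**assignment, abs(l): value})
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3): satisfiable.
assert dpll([[1, 2], [-1, 3], [-2, -3]]) is not None

# x1 and not x1: unsatisfiable, which the solver proves exhaustively.
assert dpll([[1], [-1]]) is None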
Now these skills are different from those of regular developers. The same can be said of machine learning. Many great practitioners in machine learning ("data scientists") are average or poor developers but come with backgrounds in engineering or science that make them thrive in an extended programming task, one that treats supervised machine learning as programming. It reminds me of the observation (made in Race Against the Machine) that the best chess players today are neither humans nor computers but thriving partnerships of not necessarily the best humans with not necessarily the best computers.
Borrowing a page from the experience of human translators, there will come a time when painstakingly 100%-human-created programs are deemed too expensive for all but a few mission-critical situations. The rest will be created by a redefined computer professional. At this stage this is a mental exercise, but given the example of human translators, it is definitely an exercise worth engaging in.