M1: Unit 2: Chapter 12 – Computerization



1. Given the issues of objectivity and self-reflection explored in chapters 10 and 11, and although Toury’s practice does not indicate it, there may be ways of increasing consensus and supporting claims of regularity, that is, of attaining ‘scientific’ objectives as far as possible in a given context.  Evaluative and interpretative elements would have to be eliminated from a description as far as possible.  One way of doing this would be to focus on linguistic categories and to use computers.  Computers also allow the treatment of large quantities of data, enabling the accumulation of the evidence required to support descriptive generalizations strongly, perhaps even with the aim of establishing ‘laws’ with respect to certain issues (although the notion of ‘law’ has been seriously challenged; see chapter 13).

2. Corpus linguistics is a discipline that involves the manipulation of large computerized corpora.  Baker (1993) considers that translation research had hitherto placed excessive emphasis on individual translations in isolation.  Toury’s approach, in contrast, assumes that the primary object of analysis in translation studies is not an individual TT but a coherent corpus of TTs (Baker 1993, 237-240).  Baker argues that the methods and tools of corpus linguistics could be used to implement DTS objectives, since developments in corpus linguistics can address translation researchers’ problems by providing ways of “overcoming our human limitations and minimising our reliance on intuition” (Baker 1993, 241).  Baker suggests that in order to become a ‘truly scientific’ discipline, DTS must use the tools that contemporary technology offers:

translation studies has reached a stage in its development as a discipline when it is both ready for and needs the techniques and methodology of corpus linguistics in order to make a major leap from prescriptive to descriptive statements, from methodologising to proper theorising, and from individual and fragmented pieces of research to powerful generalisations. (Baker 1993, 248)

3. Baker is critical of DTS in its then current state (1993) because of the smallness of its corpora and its ‘manual’ techniques of study.  Baker suggests that translation researchers may be reluctant to avail themselves of larger corpora, quoting Vanderauwera: “serious and systematic research into translated texts is a laborious and tiresome business”, and Toury himself: “the larger and/or more heterogeneous the corpus, the greater the difficulties one is likely to encounter while performing the process of extraction and generalization” (quoted in Baker 1993, 241).  Elsewhere Toury says that we must for the moment be content with our intuitions in selecting a corpus and in developing ideas, although he also says that we should work towards the crystallization of systematic research methods, including statistical ones (Toury 1995, 69). (For an account of the state of corpus-based translation research today, see module 2.)

4. The use of computers does not necessarily entail huge corpora as in corpus linguistics. Rui Roth-Neves proposes the use of computers on a smaller scale in order to obtain further data in support of initial, manually established descriptive findings (Roth-Neves 2000, 3).  Whether the studies are on a large or a smaller scale, computers can readily answer certain kinds of question in translation research.  Computer software can deal with tasks such as measuring the expansion and reduction of texts, recognizing syntactic patterns, searching for particular words and phrases and recording their immediate co-texts, identifying repetitions, identifying punctuation and typographical patterns, counting the number of words per sentence, and counting occurrences of words or phrases.  Computers cannot by themselves comment on issues of meaning or produce an interpretation of stretches of text in a given co-text.  It can be argued that exclusive or near-exclusive reliance on computers would limit the field of study to certain types of question, namely those amenable to the scientific goals of intersubjective consensus, replicability, and quantification.
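Some of the measurements listed above can be sketched in a few lines of code. The following is an illustrative sketch only, not a description of any tool mentioned in this chapter; the sample sentences are invented, and a real study would use a proper tokenizer and aligned texts.

```python
# Illustrative sketch of three tasks from the list above:
# expansion/reduction, words per sentence, and occurrence counts.
import re

# Invented sample texts standing in for an aligned ST-TT pair.
source_text = "The cat sat on the mat. It purred."
target_text = "Le chat s'est assis sur le tapis. Il a ronronne doucement."

def word_count(text):
    """Count word tokens with a simple regex tokenizer."""
    return len(re.findall(r"\w+", text))

# Expansion/reduction: ratio of TT length to ST length in words.
expansion_ratio = word_count(target_text) / word_count(source_text)

def words_per_sentence(text):
    """Split on sentence-final punctuation and count words per sentence."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return [word_count(s) for s in sentences]

def count_occurrences(text, phrase):
    """Count case-insensitive occurrences of a word or phrase."""
    return len(re.findall(re.escape(phrase), text, flags=re.IGNORECASE))

print(round(expansion_ratio, 2))            # 1.5 (TT is half as long again)
print(words_per_sentence(source_text))      # [6, 2]
print(count_occurrences(source_text, "the"))  # 2
```

Exactly as the paragraph argues, each of these functions yields a replicable number on which consensus is easy; what the numbers mean for the translation remains a matter of interpretation.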

5. Although Toury’s objective of scientificity points strongly to the use of computers for efficient quantitative research that eliminates evaluative elements as far as possible, Toury does not in practice take his objective to its logical conclusion, since he does not adopt Baker’s proposed methods.  There seems to be a discrepancy between Toury’s theoretical assertions of goals and method and his actual practice.  His practice implicitly admits the desirability of retaining interpretative and evaluative aspects in translation research.  It recognizes that linguistic features on which intersubjective consensus can be reached and replicability achieved, such as sentence length or grammatical differences, are not interesting in themselves. What is interesting is the effect of such linguistic features in texts, but assessing that effect requires interpretation on the part of the researcher.  Consensus in such translational matters will only ever be relative.

6. In this perspective computers can “[serve] as an aid to, and not a substitute for, human analysis” (Munday 1998, 547), and they provide an excellent tool for researchers in Translation Studies.  Jeremy Munday proposes a “computer-assisted approach” to the analysis of translation shifts, in which the basic tools of corpus linguistics give accurate and rapid access to surface features across a whole text, reducing the arduousness of what had previously been a manual task.  Once the computer has done the ‘hack work’, the researcher can use his or her powers of analysis and interpretation to produce findings (Munday 1998, 552). In other words, a quantitative stage (computer generation of findings) is followed by a qualitative stage (the researcher’s interpretation of those findings).
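The two-stage division of labour described above can be illustrated with a keyword-in-context (KWIC) concordance, one of the basic tools of corpus linguistics. The sketch below is a simplified assumption of what such a quantitative stage looks like, not Munday’s own software; the corpus lines are invented.

```python
# Quantitative stage: the computer extracts every occurrence of a node
# word together with its immediate co-text. The qualitative stage,
# interpreting the resulting concordance lines, is left to the researcher.
import re

# Invented mini-corpus standing in for lines of a target text.
corpus = [
    "the translator chose a more formal register here",
    "this register shift recurs throughout the target text",
    "no comparable register marking appears in the source",
]

def kwic(lines, node, window=3):
    """Return (left co-text, node word, right co-text) for each occurrence."""
    results = []
    for line in lines:
        tokens = re.findall(r"\w+", line.lower())
        for i, tok in enumerate(tokens):
            if tok == node:
                left = " ".join(tokens[max(0, i - window):i])
                right = " ".join(tokens[i + 1:i + 1 + window])
                results.append((left, tok, right))
    return results

for left, node, right in kwic(corpus, "register"):
    print(f"{left:>25} | {node} | {right}")
```

The computer does the ‘hack work’ of finding and aligning all three occurrences in an instant; deciding whether they evidence a systematic register shift in the translation is the researcher’s interpretative task.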
