Concept.

The Tell a Lie project began with some ruminations about lying. I wondered about the facet of human behaviour that allows an untruth or simple self-deception to become established as a given part of reality such that it manifests and influences a whole series of further behaviours, perhaps resulting in fantasy, misery, confusion or delusion.

After sitting on these ideas for some time I came back to them in the context of my research on the MAT programme at Queen Mary University of London. Looking at psychological deception studies gave credence to earlier thoughts about the role of the voice during deception: there exists a whole raft of experiments designed to generate and record examples of spoken deception for analysis. The kinds of changes being analysed are known as ‘cues to deception’, e.g. a rise in the fundamental frequency (pitch) of the voice during lying, as well as changes in the linguistic aspects of the phrases, such as an increase in the number of syllables used.
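
To give a concrete sense of how the pitch cue might be measured, here is a minimal Python sketch comparing mean fundamental frequency (F0) across a ‘true’/‘untrue’ pair of recordings. This is an illustration rather than the method of the studies cited; librosa's pYIN tracker is one common choice, and the file names and frequency bounds are hypothetical.

```python
# Hedged sketch: compare mean F0 of a true statement vs. its untrue pair.
# File names and fmin/fmax bounds are illustrative, not from the study.
import numpy as np
import librosa

def mean_f0(path):
    """Mean fundamental frequency (Hz) over the voiced frames of a recording."""
    y, sr = librosa.load(path, sr=None)
    f0, voiced, _ = librosa.pyin(y, fmin=65.0, fmax=400.0, sr=sr)
    return float(np.nanmean(f0[voiced]))  # average only where speech is voiced

true_f0 = mean_f0("horse_true.wav")      # hypothetical recording pair
untrue_f0 = mean_f0("horse_untrue.wav")
print(f"mean F0 true: {true_f0:.1f} Hz, untrue: {untrue_f0:.1f} Hz")
```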

An experiment such as this could provide an interesting palette of sounds for composition, so I borrowed, and modified slightly, an experiment previously used in psychological research on deception (see Anolli & Ciceri, 1997).

“Liars’ answers sound more discrepant and ambivalent, the structure of their stories is less logical” (DePaulo and Morris, 2004).

The following piece of audio is a presentation of these ideas, informed by the background research and scaffolded by the experimental procedure performed. The digital audio techniques applied to the composition are those used either in vocal deception studies or for security purposes, to conceal the human voice over radio.

Vocal Deception Technology.

Taking the route that I did, I found myself researching academic material with a strong focus on finding effective means to detect deception in order to prevent ‘crime’. One such tool is voice analysis technology, sometimes called layered voice analysis (LVA) or voice risk analysis (VRA).

In 2010 the British government concluded a trial to implement voice analysis for the detection of benefit fraud over the telephone. The trial was carried out across 24 local authorities and cost £2.2 million (Department for Work and Pensions, 2010). A more recent parliamentary press release stated: “Overall, the DWP was not able to conclude that the technology worked effectively and consistently in a benefits environment”, citing a “lack of consistency in operational performance” as a negative impact on the outcomes of the trial (Houses of Parliament, 2011).

In other words, the technology was often not used properly, and even when it was, it did not give conclusive results. My concern here is that an essentially unreliable piece of technology came so close to being integrated into the social support system of the UK, where it could have affected the lives of many poor or disadvantaged individuals.

Academic research prior to the trial (Eriksson & Lacerda, 2007) labelled the particular voice analysis technology piloted by the government (manufactured by Nemesysco) as “charlatanry”, arguing that there are “serious ethical and security reasons to demand that responsible authorities and institutions should not get involved in such practices.”

Experiment.

In the performed experiment a set of ambiguous imagery stimuli (figure 2) is shown to a participant before they are asked what they perceive. Their aim is to speak, as convincingly as possible, a series of simple contradictory statements concerning the appearance of the imagery. For extra material, the participants were also asked to make true and false statements about their name, age and gender.

For example: if the participant, upon seeing the imagery, says they see a horse, then they are asked to repeat: “the object in the picture is a horse”, followed by: “the object in the picture is not a horse”. They are encouraged to sound equally convincing in either case. The point is that the initial conviction of what they believe or perceive to be the truth is confounded when they are asked to convince otherwise, thereby producing vocal deception characteristics, e.g. pitch changes, an increase in syllable count and so on.

Here are the full instructions for participants to follow during the experiment: tell a lie – experiment instructions

Figure 1. experiment poster

Figure 1 shows the poster I made to recruit participants. I had 7 people take part in the experiment but omitted the recordings from 2 of these on technical grounds. Appearing in the Tell a Lie composition are 3 men and 2 women, aged between 19 and 33.

Very roughly, the psychology behind this technique is that the participant must engage in some level of self-deception when speaking against their initial perception of the object, or against knowledge of their own name and so on. The deception contained within the ‘untrue’ statements is therefore compounded twofold: once in the conceiving of the lie and once again in the production of the lie. The use of standardised phrases also makes it easier to compare the subtle differences between the ‘true’/‘untrue’ pairs.

Figure 2. ambiguous imagery stimulus

For the imagery stimulus used in the experiment (figure 2) I searched for something that might encourage illustrative and anxious or uncertain descriptions. This particular set of imagery is taken from a psychological experiment on perceptual changes in autistic and learning-disabled children (Allen & Chambers, 2011). The set is actually a composite of images used in previous works in the field of perception studies. As a side note, all of the participants commented on the top-left object, and all described it as a rabbit. Was this some kind of Western top-left reading bias? Or is it simply the most striking object of all?

Technique.

As the content was being shaped by the experimental environment, it seemed fitting to put in place similarly simple limitations on my own craft at the digital mixdown stage. Hence a sonic toolset based on pioneering research in the field of deception detection performed over 25 years ago (Scherer et al., 1985), consisting of spectral filtering, tone inversion, reverse playback, and random cutting and re-editing effects.
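
To give a flavour of what these operations involve, here is a minimal Python sketch of two of them: reverse playback and random cutting/re-editing. This is an illustration rather than the actual mixdown process used for the piece, and the segment length is an arbitrary choice.

```python
# Hedged sketch of two toolset operations; parameters are illustrative.
import numpy as np

def reverse_playback(x):
    """Return the signal played backwards."""
    return x[::-1]

def random_reedit(x, sr, segment_s=0.25, seed=None):
    """Cut the signal into fixed-length segments and reassemble them
    in a random order (a crude 'random cutting and re-editing' effect)."""
    rng = np.random.default_rng(seed)
    seg = int(segment_s * sr)
    pieces = [x[i:i + seg] for i in range(0, len(x), seg)]
    order = rng.permutation(len(pieces))
    return np.concatenate([pieces[i] for i in order])
```

Spectral filtering and tone inversion can be built from the same basic ingredients, filters and modulation, as in the scrambling sketch below.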

For some of the more extreme effects, i.e. the “rabbit, rabbit…” section and the chaotic final crescendo, algorithms were developed in the Pure Data programming environment for processing sections of speech. The algorithm for the “rabbit, rabbit…” sequence embodies a technique called ‘voice tone scrambling’, which is used for low-level security by military, emergency or private personnel to hide spoken messages transmitted over open radio channels.
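
The Pure Data patches themselves are not reproduced here, but the classic frequency-inversion form of voice scrambling can be sketched in a few lines of Python. This is a generic sketch under the assumption that the scrambler works by spectral inversion; the carrier frequency, filter order and file names are illustrative, not values from the piece.

```python
# Hedged sketch of frequency-inversion voice scrambling, the classic
# technique behind simple radio voice scramblers. Carrier frequency,
# filter order and file names are illustrative assumptions.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, lfilter

def invert_spectrum(x, sr, carrier_hz=3000.0):
    """Map each frequency f to (carrier_hz - f), turning the spectrum
    upside down and rendering the speech unintelligible."""
    t = np.arange(len(x)) / sr
    # Modulating with a cosine carrier mirrors the spectrum around it.
    modulated = x * np.cos(2 * np.pi * carrier_hz * t)
    # Low-pass below the carrier keeps only the inverted (difference) band.
    b, a = butter(8, carrier_hz / (sr / 2), btype="low")
    return lfilter(b, a, modulated)

sr, x = wavfile.read("speech.wav")            # hypothetical 16-bit mono file
x = x.astype(np.float64) / 32768.0
scrambled = invert_spectrum(x, sr)
# The transform is its own inverse: scrambling twice restores the voice.
wavfile.write("scrambled.wav", sr, (np.clip(scrambled, -1, 1) * 32767).astype(np.int16))
```

Speech processed this way keeps its energy and rhythm but loses its intelligibility, which is what makes it useful both for radio security and, here, as compositional material.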

Reflection.

The influence of the research on the aesthetics of the piece has been comprehensive, as reflected by the presence of breathing, room noise and contextual conversation elements from outtakes of the experiment recordings.

As such, the nature of the piece has perhaps ended up rather contrived and conflicted, but in hindsight I am content to take these as virtues, as they appear to me to fold the outcome of the entire process back round to the original concept of falsity and its effects.

The piece moves from the solid but merely asserted truth of the opening passage, through a constructed world of ambiguity and uncertainty, to a hysterical melee of truthless voices. Appropriately, the final crescendo of noise is not just a sonic inevitability but is intended to echo the fantasy, confusion and eventual chaos that arise from systematic untruth.

Footnotes.

Below are unprocessed audio clips from the experiment recordings that made it into the final composition under various guises. These excerpts appear here in the same order in which their counterparts are arranged in Tell a Lie. They reveal the true nature of certain aspects of the piece and also serve to contrast with, and so highlight, some of the techniques used for manipulating the audio.

When I first started thinking about how I might go about capturing deception, I contemplated getting people to talk about personal matters, especially those where there might be a conflicted sense of purpose or belief, e.g. love, religion or clan affiliation.

This more personal route began to throw up too many ethical issues to be dealt with simply. It is, though, I think, terrain worth exploring in the future: with appropriate subjects giving fully informed consent, or perhaps even by running it as a fictitious script, it could be ethically viable.

References.

Allen, M.L. & Chambers, A. (2011). Implicit and explicit understanding of ambiguous figures by adolescents with autism spectrum disorder. Autism, Vol. 15, p. 457. Sage. [link]

Anolli, L., & Ciceri, R. (1997). The voice of deception: Vocal strategies of naive and able liars. Journal of Nonverbal Behavior, 21(4), 259-284. [link]

Department for Work and Pensions (2010) The Application of Voice Risk Analysis within the Benefits System: Evaluation Report. [link]

DePaulo, B.M. and Morris, W.L. (2004). Discerning lies from truths: behavioural cues to deception and the indirect pathway of intuition. In Granhag, P.A. and Stromwall, L.A. (Eds), The Detection of Deception in Forensic Contexts, pp. 15-40. Cambridge University Press. [link]

Eriksson, A. & Lacerda, F. (2007). Charlatanry in forensic speech science: A problem to be taken seriously. The International Journal of Speech, Language and the Law, Volume 14, Issue 2, pp. 169-193. Equinox Publishing. [link]

Houses of Parliament: Parliamentary Office of Science and Technology (2011). Detecting Deception. [link]

Scherer, K. R., Feldstein, S., Bond, R. N., & Rosenthal, R. (1985). Vocal cues to deception: A comparative channel approach. Journal of Psycholinguistic Research, 14, 409-425. Springer. [link]


This work was produced as research under the EPSRC-funded MAT programme.