The walkthrough method

We are asked to think on the following:

– What is the walkthrough method?

– What is the methodology of the walkthrough method?

– How would you carry out the walkthrough method?

 

We were introduced to the paper The walkthrough method: An approach to the study of apps by Ben Light, Jean Burgess and Stefanie Duguay [1].

 

This paper proposes the study of apps and their sociocultural and economic effects, and describes a formal methodology for doing so.

 

The method combines an examination of the environment of expected use with a technical walkthrough, drawing on what is termed Critical Technocultural Discourse Analysis (CTDA). The first step is to forensically examine the environment of expected use: identifying the app’s vision, its operating model and its governance. The walkthrough process then builds a foundational corpus of data, starting with the app’s intended purpose and its culturally embedded meanings, and stepping through all the processes involved in registering a user with the app (if required). Beyond this, the technical walkthrough incorporates a data-gathering procedure covering not only registration but also everyday use of the app and how a user would go about leaving it, closing an account if one has been opened, and so on.

The walkthrough method uses interpretive techniques, drawing on Science and Technology Studies (STS) and cultural studies as a lens for app analysis. As we use it, the method is grounded in the principles of Actor-Network Theory (ANT), a specific strand of STS.

Within ANT there are intermediaries and mediators, which in turn can be human or non-human. Intermediaries pass meaning on unchanged through a network of relations, while mediators may transform meaning. In an app, a mediator might take some information and suggest related things: the example given in the paper is a dating app which, having gathered certain likes/dislikes, may suggest further likes for the user’s profile.
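As a toy illustration (entirely my own invention, not from the paper), the difference between an intermediary and a mediator could be sketched like this, with a made-up co-occurrence table standing in for whatever the app has learned about its users:

```python
# Toy illustration of ANT's intermediary/mediator distinction. All names and
# the co-occurrence table are invented placeholders, not from the paper.

# Hypothetical co-occurrence data: users who liked X also tended to like Y.
CO_OCCURRENCE = {
    "hiking": ["camping", "climbing"],
    "jazz": ["vinyl records", "live gigs"],
    "cooking": ["food markets"],
}

def intermediary(likes):
    """An intermediary passes meaning on unchanged."""
    return list(likes)

def mediator(likes):
    """A mediator transforms the input: it adds inferred further likes."""
    suggested = []
    for like in likes:
        for related in CO_OCCURRENCE.get(like, []):
            if related not in likes and related not in suggested:
                suggested.append(related)
    return list(likes) + suggested

profile = ["hiking", "jazz"]
print(intermediary(profile))  # unchanged
print(mediator(profile))      # transformed: inferred likes appended
```

The point is simply that the mediator’s output differs from its input — the non-human actor has reshaped the user’s profile, as in the dating-app example.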

The way the app presents itself, its menu structure (in more playful apps this may change frequently), the size of buttons, graphics, and physical interaction gestures (e.g. the swipe in Tinder) all contribute to a transformative action by the non-human mediator to effect change in the user.

What happens when the app is running but removed from the user’s screen, or even when the machine is switched off?

What happens when a user subverts the app by using it for a ‘non-intended’ use?

Consideration of affordances – again, does the app present itself in a way that biases the user’s reactions?

What extra features/ changes occur over extended periods of use, not apparent in the initial walkthrough?

I am not clear on how best I would go about using the walkthrough method.

  1. The paper seems very Anglo-centric – is it limited in its use? For example, how would this work in Japanese culture?
  2. The theory is presented well, but the practice of the methods described needs fleshing out.

 

 

 

[1] Light, B., Burgess, J. and Duguay, S. ‘The walkthrough method: An approach to the study of apps’. New Media & Society. DOI: 10.1177/1461444816675438

Wrestling With Olga

Theo and I assembled Olga in my garage… we have our assignment in projection mapping to complete.  – Much coding to do, more on this to come!

Please use the images below (right click, Save As, to grab them)

IMG_2464
IMG_2465
IMG_2466
IMG_2468
IMG_2480
IMG_2481
IMG_2486
all
wireframealpha
wireframe

Readings: Computational Art and Ubiquity

why spoil things by the use of  this word in this context?

PigeonBlog’s birds had the potential to test these interpellation models.
interpellation (noun): “a procedure in some legislative bodies of asking a government official to explain an act or policy, sometimes leading, in parliamentary government, to a vote of confidence or a change of government.”
Yes, this is an academic text, but I have encountered too many of these ‘rare’ words that do nothing to help the reader. The first reading, Complex Ubiquity-Effects by Ulrik Ekman, was even more densely packed with ‘difficult/unnecessary’ words. How many times do you have to see the word “qua” – the Latin ablative feminine? The formal style of writing is used to eliminate ambiguity in the meaning of the text; it is a pity that a side effect of this discipline is to make the text harder to decipher and comprehend.
This comes from our second reading this week, chapter 21 of Tactical Biopolitics: Art, Activism, and Technoscience – “Reaching the Limit: When Art Becomes Science” by Beatriz da Costa; much easier to read and more accessible.

It was not only the language she uses (apart from ‘interpellation’) but also the project she describes, “PigeonBlog”, that I found fascinating.

The use of pigeons carrying sensors at 300 ft (an altitude hard to access in any event): “The pigeon “backpack” developed for this project consisted of a combined GPS (latitude, longitude, altitude)/GSM (cell phone tower communication) unit and corresponding antennas, a dual automotive CO/NOx pollution sensor, a temperature sensor, a subscriber identity module (SIM) card interface, a microcontroller, and standard supporting electronic components.”

Da Costa reflects on the reaction to this project, which she states became widely reported. What struck me as significant was her original strategic aim: “in situating itself between the academy and nonexpert participants”, this pigeon project carries huge potential for other investigations, not confined simply to using pigeons with tech to survey pollution levels, but extending to many other projects, as signposted by the title of the book and chapter: Tactical Biopolitics: Art, Activism, and Technoscience, “Reaching the Limit: When Art Becomes Science”.
The first reading, the introduction to Complex Ubiquity-Effects by Ulrik Ekman, is a very dense and powerful assembly of examples of how the ‘third wave of computing’ is already underway. I have not read any of the essays in this book yet, but Ekman describes how the explosion in the use of RFIDs, multiple sensors, small portable devices, wearables, portable computing, data flows and so on is changing how we experience the world. I will include the lovely prelude here… if only the text were more accessible!
Prelude
  • “Three tourists stop in front of a boom coming down at the entrance to a nature reserve, reading on the little display on the boom that the maximum number of people permitted in the reserve has been reached at this point in time, this close to the breeding season.
  • A series of digital signposts and the GPS in the car lead the driver and his family down a set of side streets due to road repair and construction.
  • A media art installation embedded in the city square has dynamic and interactive video portraits appear on the ground in front of busy passers-by and makes them stop, play, and wonder how they were followed and picked out beforehand.
  • Every once in a while a 17-year-old son gets irritated at having to use his mother’s computer on the Internet—because he is quite frequently asked to consider buying new candles, bathrobes, bras, and women’s magazines.
  • An academic who gets home after a long day at work only vaguely notices that the lighting in the smart home is subdued a bit, the vacuum cleaner stops, and a string quartet replaces the pop songs from yesterday.”

24th Nov, Reflections after our discussion “The Non Human Turn”

#royals_as_robots

Walking home after seeing the last surviving (playing) member of Captain Beefheart and the Magic Band, 50 years after I bought ‘Safe as Milk’, I pick up an Evening Standard and wait for my takeaway chips. I feel older still. The cover headline shrieks “ROBOTS SAVE 500 LIVES IN LONDON”.

Ignoring for a moment the ideological motivation to lull our fears about the privatisation of the NHS – I am reading an old-media, ‘right wing’, biased free sheet – I stop to think about the implications of the leader article: where does this fit into the absolute survival of the human race?

After reading Richard Grusin’s introduction I felt pessimistic. Does this give me cause for hope? I may have my cancerous prostate removed more accurately, so I do not become impotent? It may be a concern for me. My discussion partner was optimistic: look how many improvements there are in the modern world… healthier, longer lives… comfort, communication… etc. I reflected on this, and I can only agree. BUT… at what cost? A minority of Londoners get freedom from impotence while 350 miles north of the capital, in Cumbria, hundreds are under water due to this week’s extreme weather events.

Ratko Mladić the war criminal gets his comeuppance, helped by better communication and coordination through our improved telecommunications; and Mugabe is forced into resignation.

The Daily Mail spreads fake news of a terrorist attack at Selfridges, Oxford Street, London. The guitarist of the Magic Band is held up, nearly missing our concert.

But wait! Fake news spread by the Daily Mail about Oxford Street – this very evening…

http://www.onenewspage.com/n/World/75eiq6j1u/The-Daily-Mail-Erroneous-Tweet-On-London.htm

and –

Fake News by Daily Mail

 

I am left thinking: the non-humans out-act and out-implement consequences beyond our control or comprehension – but will they be a tool for our survival or the noose that finishes us off?

I suspect the latter, because the stupid and lazy people (humans) outnumber the proactive, thoughtful ones. OK, not stupid or lazy, but possibly victims of the corrosive side effects of the non-human.

Maybe some of the Magic Band audience would benefit from robot surgery, many of us will need it soon. Then, maybe the alien Annunaki will save us 😉

 

The Nonhuman Turn

The readings were Grusin, Richard, ‘Introduction’, in The Nonhuman Turn, University of Minnesota Press, 2015; and Chun, Wendy Hui Kyong, ‘Crisis, Crisis, Crisis; or, the Temporality of Networks’, in The Nonhuman Turn (2015): 139–166.

Until this term and the course on Computational Arts-based research and theory, I had not lifted the covers of writings like these. This week is no easier: we have been given two texts to read and summarise. Grusin’s introduction lays out the central theme of this collection of plenary addresses from the 2012 Nonhuman Turn in 21st Century Studies conference. Grusin points to the meaning of the word ‘turn’ and how it is used in the context of the nonhuman. The nonhuman is not restricted simply to machines but extends to ‘animals, plants, organisms, climatic systems, technologies, or ecosystems’. He goes on to describe speculative realism: the first conference on it was held at Goldsmiths in 2007, where a group of speakers challenged post-Kantian correlationism. Speculative realism defines itself ‘loosely in its stance of metaphysical realism’, works to overturn anthropocentrism (the privileging of the human over the non-human), and favours distinct forms of realism against the dominant forms of idealism in so much of contemporary philosophy. Grusin also describes how technology has accelerated the pace and intensity of academic discourse around the ‘turn’, noting an estimate that in 2012, 51% of internet traffic was non-human.

Wendy Hui Kyong Chun, in her contribution, argues that crisis both exceeds and is structurally necessary to networks. She contrasts the medium of television, citing catastrophe as central to commercial TV programming, with new media as a ‘crisis machine’. She examines the many ways new media have affected us and how crises cut across our rules, even creating new norms. Many problems arise from the sheer speed of telecommunications, ‘undermining the need for scholarly contemplation’. ‘Crisis structures new media temporality.’

Donald Trump Twitter Chatbot and using Tensorflow

We understand that the old way of creating a chatbot was to build a substantial repository of questions and possible answers, creating the illusion of a conversation between the user and the computer. This is static, labour-intensive and not a self-learning system.
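The “old way” can be sketched in a few lines: a fixed table of patterns and canned replies, with no learning at all. The patterns and answers below are invented placeholders:

```python
# A minimal sketch of a static, rule-based chatbot: a repository of patterns
# and canned answers, with no learning. All rules here are invented examples.
import re

RULES = [
    (re.compile(r"\bhello\b|\bhi\b", re.I), "Hello! How can I help?"),
    (re.compile(r"\bweather\b", re.I), "I don't have live data, but I hope it's sunny."),
    (re.compile(r"\bbye\b", re.I), "Goodbye!"),
]
FALLBACK = "Sorry, I don't understand."

def reply(message):
    """Return the first canned answer whose pattern matches the message."""
    for pattern, answer in RULES:
        if pattern.search(message):
            return answer
    return FALLBACK

print(reply("Hi there"))
print(reply("What's the weather like?"))
print(reply("something else entirely"))  # falls through to the fallback
```

Every new topic means hand-writing more rules, which is exactly the labour-intensive, static quality the neural approach below tries to escape.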

More recently (as we hear almost all the time) – even this morning, BBC Radio 4 announced that over Christmas the Today show will be hosted by “AI”. It is quite annoying that it is so much in the news; however, for our purposes, and to demonstrate AI as part of the “AI as a Tool for Art” topic to be presented at the end of term, I am gingerly attempting to build/borrow/steal a working AI chatbot.

Without drowning in detail, a layered recurrent neural network is constructed as follows:

  1. Download a dataset (this could be movie subtitle files or a Twitter feed, for example)
  2. Create a model
  3. Train the model
  4. Test the model
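The data-preparation side of step 1 (and the encoding that feeds step 2) can be sketched in plain Python. The helpers and toy corpus below are my own invention, standing in for what easy_seq2seq’s data utilities do:

```python
# A minimal sketch of seq2seq data preparation: build a vocabulary from
# question/answer pairs and turn sentences into the integer sequences the
# model consumes. The toy corpus and helper names are invented.

PAD, GO, EOS, UNK = 0, 1, 2, 3  # special tokens conventionally used by seq2seq models

def build_vocab(sentences):
    """Map every word in the corpus to an integer id, after the specials."""
    vocab = {"_PAD": PAD, "_GO": GO, "_EOS": EOS, "_UNK": UNK}
    for sentence in sentences:
        for word in sentence.lower().split():
            vocab.setdefault(word, len(vocab))
    return vocab

def encode(sentence, vocab):
    """Turn a sentence into ids, falling back to _UNK for unseen words."""
    return [vocab.get(w, UNK) for w in sentence.lower().split()]

pairs = [("how are you", "i am fine"), ("make america great", "again")]
questions = [q for q, _ in pairs]
answers = [a for _, a in pairs]
vocab = build_vocab(questions + answers)

print(encode("how are you", vocab))
print(encode("totally unseen words", vocab))  # every word maps to _UNK
```

Steps 2–4 (create, train and test the model) are then handled by the TensorFlow seq2seq code itself, which consumes these integer sequences in batches.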

TensorFlow, developed by Google and used via the Python language, seemed the most productive approach. I installed it first on my Windows machine but soon realised it was a lot easier to use Linux, so I set up a virtual machine, installed all the necessary Python libraries, and used the code from suriyadeepan/easy_seq2seq on GitHub.com to start my experimentation.

The example I chose was Mr. Donald Trump’s Twitter feed. I am already learning how to use seq2seq and will report back soon on my findings.

Week 2 of Art Based Research project

Timeline for Project at:

https://docs.google.com/spreadsheets/d/1wrPen6yY3jedji6wTFoX8n6p5K3wE0cZGJg-UUuI_9I/edit#gid=0

Boston Square mapping:
This is to help us think about and graph the relative positions of artworks, denoting on one axis each work’s independence from the artist and on the other how machine-based the work is.

x-axis: Human – Machine
y-axis: Machine Independence – Manual
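The bookkeeping behind the square could be sketched like this. The scoring function, axis conventions and placeholder works below are hypothetical illustrations, not our actual assessments:

```python
# A sketch of the Boston-square bookkeeping: each artwork gets a score on the
# two axes (both in [-1, 1]) and a quadrant label is derived from the signs.
# The scores below are invented placeholders.

def quadrant(human_machine, independence):
    """Label the quadrant: x runs Human(-1)..Machine(+1), y runs Manual(-1)..Independent(+1)."""
    x_label = "Machine" if human_machine >= 0 else "Human"
    y_label = "Independent" if independence >= 0 else "Manual"
    return f"{x_label}/{y_label}"

works = {
    "Hand-drawn sketch": (-0.9, -0.8),
    "Generative AI piece": (0.8, 0.7),
}

for title, (x, y) in works.items():
    print(title, "->", quadrant(x, y))
```

Plotting these (x, y) pairs on the shared Google Drawing then gives the visual version of the same mapping.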

Located at:

https://docs.google.com/drawings/d/1QGvAiW3K8ciyIUhXkNf9dSB1PAxXaP1d8BS_-mRMvJ0/edit

 

PowerPoint presentation of my findings:

AI and Machine Art v1.0

 

EVERY THING AT ONCE

A very enjoyable exhibition of recent works held at StoreX in Aldwych, London, November 2017, in conjunction with the Lisson Gallery.

I particularly enjoyed Ryoji Ikeda’s installation, Test Pattern. It was an example of how far you can go with an immersive sound/light environment – it responded in real time to the movement of visitors in the space. I felt a bit queasy at the end; perhaps my brain was overloaded. A great experience, nonetheless.

I made a short video on my phone to record this and some of the other exhibits:

 

 

Algorithm

Last week we read the paper “Algorithm” by Andrew Goffey.

link to Goffey’s text

Goffey contributed this essay to explore different ways of defining the word. It is one of 36 essays in Software Studies: A Lexicon [1].

He starts the paper by offering the following definition of an algorithm (A.): “an unambiguous specification of how to solve a class of problems in a well-defined formal language [2] for calculating a function, for data processing or automated reasoning tasks”. Goffey (G.) heads his paper with “Algorithm = logic + control.” [3]
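Kowalski’s slogan can be illustrated by holding the logic fixed while varying the control. The toy example below is my own, not from Goffey’s essay: both functions realise the same logical specification of the factorial relation, but the control strategy differs.

```python
# "Algorithm = logic + control": same logic (the factorial relation),
# two different control strategies. Toy illustration, not from the essay.

def factorial_recursive(n):
    """Control: recursion, driven by the call stack."""
    return 1 if n <= 1 else n * factorial_recursive(n - 1)

def factorial_iterative(n):
    """Same logic, different control: an explicit loop."""
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

# Both compute the same function; only the control component differs.
print(factorial_recursive(5), factorial_iterative(5))
```

The logic says *what* counts as a correct answer; the control decides *how* the machine arrives at it, which is where the engineer’s concern with pragmatic efficiency enters.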
Goffey points out that there is not only the scientist/mathematician’s view of A. as a “theoretical entity … having an autonomous existence independent … of implementation details … in (class) libraries …”, but also the software engineer’s view of A. as having pragmatic efficiency to solve a particular problem.
G. points out that it is not enough to define algorithms as existing in a purely mathematical, abstract sense, for we cannot view A. in total isolation in the real world: they affect and are affected by things around them. He goes on to point out specifically how A. are dependent on data, and how A. and data are useless without each other. Also, G. points out, the purely ‘scientific’ or ‘software engineer’ definitions do not adequately inform us about the social, cultural and political roles A. play.

G. approaches the definition from a philosophical standpoint; he describes programming languages as ‘syntactic artefacts’ that exist as machines – a series of steps to perform an actual outcome. G. goes on to illustrate the problem of bridging the purely theoretical definition of A. and the ‘real’ physical world, referring in his paper to Foucault.
G. proposes that A. be referred to as a (series of) statement(s), as Foucault uses the term in the Archaeology of Knowledge [4]. That is quite a dense work for this short summary; nevertheless, the statement in the Foucauldian sense, when we think of it in the context of A., undermines the differentiation between the purely theoretical and the practical definitions. This returns us to the point that A. surely cannot exist in a void, independent of extrinsic factors.
The paper, written in 2006, does however show how rapidly the world is changing. Coincidentally, Hinton, Osindero and Teh [5] published a paper in 2006 proposing a many-layered neural network acting as an unsupervised machine, fine-tuned with backpropagation.
This contrasts with rules-based A., which is largely grounded in formal logic, whereas the Hinton paper heralds the onset of machine learning, giving computers the ability to refine their A. with no human intervention. Goffey does not bring this into his essay – a shame, as he misses possibly the greatest leap in technology since the introduction of computers themselves.
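What “refining their A. with no human intervention” means mechanically can be sketched with the simplest possible case: a single parameter adjusted by gradient descent. This is a toy of my own, far simpler than Hinton’s deep belief nets, but the principle is the same — the update step is performed by the machine, not written by hand for each case.

```python
# A minimal sketch of a machine refining its own parameter: gradient descent
# fits a single weight w so that w * x approximates y = 3 * x.
# The toy dataset is invented.

data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]  # toy data generated by y = 3x

w = 0.0    # initial guess
lr = 0.01  # learning rate
for _ in range(500):
    # Gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # the update no human performs by hand

print(round(w, 3))  # converges towards 3.0
```

Scale this from one weight to millions, stack the layers, and the resulting behaviour becomes the hard-to-decipher “black box” described below.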
The machine writes and re-writes its own A. with no human intervention. AI machines refining their own algorithms, with the near impossibility of deciphering what they have done and how, pose new questions. With AI there is only a slim possibility of understanding how the machine-created A. work – a black box. Does this pose a threat to humans, or might it save us and the planet?

[1] Fuller, Matthew (ed.). Software Studies: A Lexicon. The MIT Press, Cambridge, Massachusetts / London, England (2006)
[2]Rogers 1987:2
[3] Kowalski, Robert. ‘Algorithm = Logic + Control’. Communications of the ACM, Volume 22, Issue 7, July 1979
[4]Foucault, Michel. 1969. The Archaeology of Knowledge. Trans. A. M. Sheridan Smith. London and New York: Routledge, 2002. ISBN 0-415-28753-7.
[5]Hinton, G. E.; Osindero, S.; Teh, Y. W. (2006). “A Fast Learning Algorithm for Deep Belief Nets” (PDF). Neural Computation. 18 (7): 1527–1554. PMID 16764513. doi:10.1162/neco.2006.18.7.1527.