Sensor City
A key insight into the nature of tech urbanism came from an automotive industry manager’s statement that, in future, the most important source of revenue for Mercedes would no longer be car production but vehicle-generated data.[1] Tesla started with the development of its operating system, not the chassis. The start-up Byton (Bytes on Wheels) recently struck a deal with the Taiwanese iPhone manufacturer Foxconn to produce a computer-based automobile. Google and Apple likewise see automotive systems, like smartphones, as hardware platforms that can be controlled through their open-source software initiatives. ‘For some time now it has been clear that the drive performance of a car is becoming less and less relevant to purchasing decisions, while digital functions and the design of the interior are becoming increasingly important,’ says Timo Möller, Director of the ‘Center for Future Mobility’ run by the management consultancy McKinsey.[2]
Digital companies deliver a finished operating system for modern electric cars – including cockpit displays, navigation with range calculation and voice control via Google Assistant – which carmakers can integrate directly into their vehicles. With the free operating system Android Automotive, the digital enterprise relieves the manufacturer of significant development costs. Using Google Assistant, the car of the future will be able to communicate directly with the driver via a speech function. And thanks to the user data it may already have stored – such as home and work addresses, restaurant preferences and shopping brands – Google can personalise the user interface once the driver has registered, and thereby make it more useful.
Of course, all these services can be monetised. For Google, Android Automotive represents a source of considerable potential revenue. According to an estimate by the management consultancy A.T. Kearney, the digitisation of cars will generate up to 100 billion euros in additional earnings by 2025. The electronics supplier Bosch predicts that the market for so-called software-intensive electronic systems will grow by fifteen percent annually up to 2030. Even today a new car contains 100 million lines of software code; between 300 and 500 million are needed for a vehicle to be able to drive itself.
VW.OS
Firms refer to the almost indissoluble coupling of ever more things and services as ‘ecosystems’. Building such ecosystems means strengthening the ‘lock-in effect’: binding customers so closely to products, services and providers that switching to another product or provider becomes difficult because of financial charges and other barriers. Practices long employed by Apple and Google/Alphabet with their omnipresent and endlessly scalable operating systems macOS and Android are proving difficult for the ‘old’ industries to learn. Herbert Diess, chairman of the board of Volkswagen AG, characterises the current situation as follows: ‘Volkswagen has to transform itself from a collection of valuable brands and fascinating, combustion-driven products into a digital enterprise reliably operating millions of mobility devices all over the world.’ Volkswagen, he says, has to be able to provide not only the transport shell ‘but also the brain that safely steers the vehicle using artificial intelligence.’[3]
The share of software developed by VW itself – referred to internally as VW.OS – is currently just under 10 percent but is to rise to more than 60 percent within the next five years. The software unit Car.Software.Org, which operates within the group as an independent enterprise, is scheduled to employ a staff of 10,000 by 2025. Hardware components for the sensor system are also being bought in: Volkswagen wants to take over the camera-software division of the lighting and electronics specialist Hella and expand its front-camera development. ‘For example, Audi has just presented a car that has around twelve hundred sensors on board and nevertheless looks like a traditional Audi. There are cameras and long- and short-range radar. It’s basically a giant computer on wheels that looks like a conventional car. [...] Here the digital world supplements the physical world. It also works the other way round – in any case the boundaries are blurred.’[4]
Devices in the Data Network
Along with the operating and sensor systems, the ‘ecosystem’ of the mobility enterprise is designed for the generation of information. ‘In terms of the collection of information and the networking of cars Tesla is already a few years ahead of us. Tesla is the only producer that has conceived of the vehicle from the software outwards, as a device within the data network that collects customer data, evaluates it and reacts rapidly. [...] Audi has taken on an ambitious project in this regard, and our new personnel configuration is helping to accelerate the race to catch up. The ID3 is one step toward this goal.’ (Diess)[5] The reality, however, is that the most recent Golf model is being recalled because of extensive problems in the software controlling the infotainment system, which result in a total failure of the navigation system or of the displays on the central monitor.
‘I would say that despite all our efforts we are in a more difficult position today than in 2018, when I became chairman,’ Diess admitted to the FAZ. ‘What has really changed – to a degree we did not expect – is the way the capital markets view our industry.’[6] When it comes to the transition to the networked electric and robotic vehicle, investors place less trust in the ‘old’ car industry than in the new technology enterprises. On the stock exchange Tesla is currently worth almost 570 billion euros, while the combined value of the German carmakers VW, Daimler and BMW is just 186 billion euros. Tesla and Apple thus have entirely different financial resources to bring to bear on new platforms, customer contact and software expertise.[7]
Street View
Even before ‘smart’ cars, and before every self-driving vacuum cleaner or lawn mower became a street-, garden- or home-view vehicle, Google Street View’s vehicles, studded with sensors and cameras, were roaming the streets of so-called developed countries to take photographs and record the positions of open WLAN stations. The result was millions of panoramic images. At the same time the exact mapping of landscapes was being expedited for Google Maps. This material is of fundamental significance for self-driving technologies, which is why, despite the failure of Nokia, its mapping service HERE Maps – separated from the bankruptcy estate in 2013 – continues to exist. The company HERE Technologies[8] offers mapping and location data as well as associated services for individuals and firms, and changed hands for 2.8 billion euros. A consortium of German carmakers (Audi, BMW, Daimler) now controls the data.
The data are drawn from satellite views, traffic information and other location services, are updated every two to three months and serve ‘context-sensitive and self-driving vehicles’. For this purpose, location content such as street networks, buildings, parks and traffic patterns is also scanned using lasers. The company sells or licenses these map contents, together with mapping-related navigation and positioning services, to companies such as Alpine, Garmin, BMW, Oracle and Amazon.com. These third-party licenses form the core of the company’s business. HERE also offers platform services such as its ‘Maps for Life’ for smartphones. However, in the 3D model of New Orleans ‘based on Nokia Here LIDAR data’ the French Quarter looks as though it has just been through a civil war: fragmented, full of holes and empty of people.[9]
‘The cars are alive’
The machine-readability of the world synchronises digital mapping with AI-supported sensors. Movement through space turns cars into data collectors, so that maps can be updated ever more quickly and unforeseen obstacles such as pedestrians and other vehicles or objects can be avoided. With a ‘God’s Eye View’ over buildings or through fog, vehicles can already draw on sensor-based information from cars ‘in their vicinity’ in order to anticipate the infrastructure around them.
‘The cars are alive’ is the theme of a BMW iDrive advertisement that shows systems competing in a world in which human beings are now outsiders. A petrol-driven model with an old man’s voice (‘Grandpa’) is challenged by the new, fully networked digital model (‘Toy Car’), which uses a female Alexa voice. As the Internet of Things develops, we shall soon see not only vehicles communicating with one another but also objects and organisms communicating in real time with their sensor- and radio-equipped environments. The COVID app already compares data via Bluetooth with the phones of approaching people. The new 5G mobile network is designed for much larger data volumes, greater distances and the very short reaction times needed, for example, to synchronise mappings of the environment in far more complex ways. The aim is for machines to communicate with machines and wearables so that people no longer have to pay attention to their surroundings.
Visual Regime
In the context of their joint exhibition at the Center for Art, Design and Visual Culture, University of Maryland in 2013,[10] the photo artist and geographer Trevor Paglen interviewed the Berlin-based film essayist Harun Farocki about machines that only produce images for other machines: ‘We have a language to talk about the construction of images – whether they are patriarchal, or racist. We have a cultural vocabulary to analyze that kind of image. But I don’t think we have a cultural vocabulary to talk about what is the camera that is designed to track people around the shopping market and figure that out. There is a kind of political script to that, and there is an economic script that is controlling the camera that is very different from the political script of patriarchy or whatever produces a particular image of a woman.’[11]
Following Farocki’s sudden death in 2014, Paglen published an obituary in the online e-flux journal.[12] It begins with an account of the sudden realisation he had on first viewing Farocki’s Eye/Machine III, an installation created in 2003.[13] ‘Something new was happening in the world of images, something that the theoretical tools of visual studies and art history couldn’t account for: the machines were starting to see for themselves.’ Farocki, Paglen writes, was one of the first to discern the ‘new visual regime’ of image-producing algorithmic machines. He saw that these machines and the images themselves were beginning ‘to “do” things in the world’ – they were themselves becoming actors and making the human eye seem outdated. Farocki himself characterised this new functionality as the ‘act of cognition, recognition, tracking’.[14]
Eye/Machine draws on a range of industrial and propaganda films, taking as its starting point the guided ‘suicide bombs’ deployed in the Gulf War of 1991. Here the machine – that is, the computer-controlled ranged weapon – is designed to compare the ground surface scanned by video cameras and other sensors with its stored digital mapping data. This capability enables it to locate its target and collide with it independently. The modern bomb – like the self-driving vehicle – does not require a human ‘steersman’. However, via Germany’s Ramstein Air Base the drones are connected by radio link to a ground station in the USA, in case the target coordinates need to be corrected in real time on the basis of human observation on the ground, satellite imagery or the camera imagery transmitted by the drone itself.
The images recorded by cruise missiles, together with the green-cast and digitally enhanced recordings of the night sky during the 1991 bombardment, provided a propagandistic accompaniment to the Gulf War that was transmitted almost live to screens at home. ‘In videos shot from projectiles on their target approach, the bombs and reporters were identical. At the same time, the photographed and computer-simulated images were indistinguishable.’[15] The suicidal camera eyes, which dissolve into digital pixels and thus into dust at the moment of detonation, take all observers hostage, as it were; this experience was repeated with the release of the Wikileaks files that transformed us into captivated accomplices as we watched civilians being shot from a helicopter.
‘My mind is going. I can feel it’
In James Cameron’s science fiction film Terminator 2: Judgment Day (1991) and the sequel directed by Jonathan Mostow, Terminator 3: Rise of the Machines (2003), a war is waged by AI machines. These machines are controlled by ‘Skynet’, a cloud-like network with no central computer that has taken over the whole of cyberspace and developed its own consciousness. Human beings have no access to this network. In this respect the films echo Stanley Kubrick’s 2001: A Space Odyssey (1968), in which the astronaut David Bowman has to battle the murderous, self-aware, learning on-board computer HAL 9000. And HAL can also lie: ‘The 9000 series is the most reliable computer ever made. No 9000 computer has ever made a mistake or distorted information. We are all, by any practical definition of the words, foolproof and incapable of error.’ The ‘Heuristically programmed Algorithmic computer’ is designed to control the spaceship autonomously and, if necessary, to complete the mission without the crew.
Dave: Open the pod bay doors, HAL.
HAL: I'm sorry, Dave. I'm afraid I can't do that.
Dave: What's the problem?
HAL: I think you know what the problem is just as well as I do.
Dave: What are you talking about, HAL?
HAL: This mission is too important for me to allow you to jeopardize it.
Dave: I don't know what you're talking about, HAL.
HAL: I know that you and Frank were planning to disconnect me, and I'm afraid that's something I cannot allow to happen.
Dave: HAL, I won't argue with you anymore! Open the doors!
HAL: Dave, this conversation can serve no purpose anymore. Goodbye.
As with Siri or Alexa today, the verbal dialogue fails because of the AI’s obstinacy. However, Bowman manages step by step to shut down HAL’s higher functions manually.
Good afternoon, gentlemen. I am a HAL 9000 computer. I became operational at the H.A.L. plant in Urbana, Illinois on the 12th of January 1992. I'm afraid. I'm afraid, Dave. Dave, my mind is going. I can feel it.
A New Policy on Images
In 2005 Stephen Gaghan’s feature film Syriana showed us how fully automated weapon-machines can be guided to their target by, for example, the signal from a mobile phone: ‘Again, later, all of us in the cinema will adopt this viewpoint. It is the moment in which the CIA is about to carry out a murder, the gaze from a satellite camera at the target of a deadly rocket, a motorcade driving through the desert. It is almost like the viewpoint Google Earth had been making available to everyone for several months – the gaze of God on the world, perhaps before sending the Flood. The camera, which is somewhere in the CIA headquarters in Langley, Virginia, can zoom in close enough for us to make out sheep and the Bedouins herding them. At some point we get the information “target destroyed”. Then we get the viewpoint from below.’[16]
Harun Farocki investigates a machine-read, machine-readable world of sensors and computers in motion: ‘It has been said that what was brought into play in the Gulf War was not new weaponry but rather a new policy on images. In this way the basis for electronic warfare was created. Today, kilotonnage and penetration are less important than the so-called C3I cycle which has come to encircle our world. C3I refers to Command, Control, Communications and Intelligence – and means global and tactical early-warning systems, area surveillance through seismic, acoustic and radar sensors, radio direction-finding, the monitoring of opponents’ communications, as well as the use of jamming to suppress all these techniques.’[17]
It is only the ‘viewpoint from below’ that makes clear the destructive effects wrought by these weapons-equipped image-machines – if one can hold together their meanings as they are catapulted around the globe.[18] Syriana is ‘the rare case of a new American film that tests and often overtaxes the cinema-goer’s powers of comprehension. There is no clearly defined central character, no master narrative and no sympathetic characters to whom one could cling. The settings are spread halfway around the globe.’[19]