(Title Image: metaleater.com)

The future is now. Supposedly. Despite that, many independence arguments and discussions remain stuck in the 19th century (or earlier), presented in very two-dimensional terms. For any independence argument to be future-proofed, it has to take technological and scientific change into consideration as much as politics, culture and the constitution – hence this series of posts.

Some of those areas – including robotics and cybernetics – are at the cutting edge of modern technology. They raise delicate and complicated philosophical and ethical considerations that inevitably drag public policy into the debate.

Wales is currently at the forefront of at least one of those debates.

Modern Robotics: A Quick Guide

Firstly, it’s worth briefly defining what a robot actually is.

  • Robot – A machine pre-programmed to perform certain tasks which may or may not involve some autonomous decision-making.
  • Android/Gynoid – An autonomous artificial being designed to resemble a human in appearance and behaviour, with a heightened artificial intelligence and/or free will.
  • Cyborg/Cybernetic organism – A combination of organic/biological matter and consciously controlled technology (biomechatronics).
  • Mech/Mecha – A humanoid-looking machine controlled by a human from the inside (e.g. Titan the Robot).

Humanity has been trying to create semi-autonomous mechanical devices since Ancient Greece. The most famous example in pre-industrial history was perhaps Leonardo da Vinci’s plans for a mechanical knight – whether it was actually built or not is debated.

Although the word “robot” conjures images of humanoid science-fiction examples like Metropolis, Bender from Futurama or Data from Star Trek, robots have primarily been developed as industrial tools, not with the end goal of creating and sustaining artificial life.

Robotics has been a key part of car manufacturing for decades. You can buy robots to do the vacuuming, while a new one on the market called Jibo functions as a home entertainment centre. If Google has its way, self-driving cars will be on American roads within a few years – tests are expected on public roads in the UK in 2015.

You’ll find robots in unexpected places too, notably the NHS. Some Welsh hospitals have robots working in pharmacies, others have “Dalek” cleaning robots to sterilise rooms. In last year’s budget the Welsh Government put funds towards the purchase of at least two Da Vinci surgery systems – which are used to carry out hysterectomies and prostate cancer treatments.

In terms of biomechatronics, artificial limbs have been developed that are consciously controlled. The field has seen a lot of focused research since the Iraq and Afghanistan Wars, which led to a large increase in “traumatic limb amputations” (here). The iLimb artificial hand – developed by Scottish company Touch Bionics – provides fine motor control that was previously unachievable with traditional prosthetics.

Cochlear implants have been used for decades to treat deafness, while retinal implants/”artificial eyes” are being actively developed along the same lines to treat blindness – with mixed success so far. Bionic implants could also be used in the brain, acting like a “pacemaker” to treat conditions such as Alzheimer’s and Parkinson’s diseases.

In terms of education, I’m sure many of us who went to school in the 1980s and 1990s remember Logo – a simplified programming language used to control a device or draw patterns. I’m not sure if it’s still used or not. More advanced robots have been developed for medical and dental training.
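For anyone who didn’t come across it, Logo boiled down to issuing simple movement and turning commands to an on-screen “turtle” (or a small floor robot). Python’s standard turtle module is a direct descendant of that idea; as a rough illustration – in Python rather than Logo itself – drawing a square looks something like this:

```python
# A minimal sketch of Logo-style turtle graphics, using Python's built-in
# "turtle" module rather than Logo itself.
import turtle

pen = turtle.Turtle()

# Draw a square: go forward, turn right 90 degrees, repeat four times.
for _ in range(4):
    pen.forward(100)   # move 100 units forward, drawing a line
    pen.right(90)      # turn 90 degrees clockwise

turtle.done()          # keep the window open until it's closed
```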

You can’t discuss modern robotics without mentioning Japan. It’s said that around 40% of all the robots in the world are in Japan, and it’s a leading centre for research into artificial intelligence and the creation of humanoid robots. The Japanese have developed completely automated restaurants, robot hostesses and a (very disturbing) artificial mouth and nasal cavity that mimics human speech.

Closer to home, Glyndwr University recently announced they’re going to offer a Master’s degree in robotics based around the Baxter Research Robot. The University of South Wales also has a centre for robotics research and, back in April, computer science students there were investigating the use of robotics to help people with cognitive impairment.

See, these posts aren’t as far-fetched or random as they appear to be. The technology’s moving at such a pace that it’s claimed robots will be routinely carrying out household tasks by the 2030s and rivalling human intelligence by 2050 – though it’s worth pointing out that people have been saying the same thing since the 1980s, just pushing the dates back.

Roboethics

Robots are completely amoral and, by definition, “technology”. You could therefore argue that they’re no more subject to ethical considerations than a toaster, and that all issues surrounding robotics are purely technical. But once you factor in potential human uses of robotics and biomechatronics, as well as the future of artificial intelligence, there are very significant questions and issues.

Culpability and responsibility – If a self-driving car has a fatal accident, or if a Da Vinci robot goes all Hannibal Lecter, who (or what) is culpable? Would it count as manslaughter by the programmer, or an industrial accident? What if this were taken further and an android with pre-programmed free will committed a serious crime? Could that “individual” be punished on the same terms as a human? Or is the person who built the machine responsible?

Technological unemployment/Automation – Many jobs considered menial or dangerous could be carried out by robots or computer programs once the technology’s there. Robots don’t eat, don’t sleep, don’t take holidays and don’t go on strike. There’s a bigger up-front cost, but after that the only considerations are routine maintenance, upgrades and spare parts. Technological unemployment is a process that’s already underway through computerisation, and it’s one overlooked reason why Wales lost most of its heavy industry jobs.

Skills that are currently in demand – like social “soft skills” – would go out the window if large chunks of human interaction were replaced with automated machines and algorithms. If lots of low-skilled customer service and consumer roles were automated (we’re already seeing it in supermarkets), the jobs traditionally taken up by larger numbers of women and students – especially those who prefer to work part-time – could become hard to come by.

A “post-work economy”? – Following on from the above, if more people are put out of work by automation and there are fewer jobs around, what would everyone do? How would class dynamics change?

The economy might – in the long term – create a “technical class” of engineers, mathematicians and scientists that would become dominant in an automated economy, along with those who bankroll it.

At a more fundamental level, we would all have to rethink what “work” is, and it could go two ways.

The first is much shorter working weeks, allowing people to devote more time to personal interests and providing an environment in which things like culture, the arts and sport can flourish, with the emphasis on personal improvement rather than material gain, as most material needs would be taken care of.

At the same time, though, people would earn significantly less than they would working full-time, so an automated economy could require a mass redistribution of wealth from the centre to make sure everyone benefits.

Without that, the second scenario is more likely – mass unemployment and a slump in living standards for those who’ve been left behind academically or replaced by technology. That’s a lot of people.

The routine use of automation and robotics could massively increase domestic productivity, yet at the same time result in little to no economic gain for the population at large. You could even consider it counter-productive in economic terms.

Use of robotics in medicine – The use of artificial prosthetics will remain uncontroversial. Using robots routinely for things like surgery, or eventually even basic nursing care, will raise questions about liability (as mentioned earlier) as well as safety standards and the level of “trust” between patient and care “provider”. Dehumanising something as quintessentially social as health care will probably cause a lot of concern.

There’s also the issue of machines being used to voluntarily enhance human abilities (transhumanism). Those enhanced abilities could, however silly this sounds, be used nefariously – arms with super-strength, for example – especially if such things aren’t properly regulated and become available on the open market.

Artificial intelligence and free will – There’s a clear difference between a robot designed and built for a series of specific tasks, and an android that may or may not have free will or consciousness. As you might expect, a subject as abstruse as this has been explored in several episodes of Star Trek.

It looks like this one may have an answer (for now).

Physicists and mathematicians from the University of Wisconsin-Madison recently released research suggesting that a completely sentient artificial being is impossible at present. Current technology can’t recreate consciousness because current (and known) computers can’t fully integrate information in the same way a brain can. An entirely new field of computing and mathematics would need to be created before we could think about creating androids with free will.

None of this means you couldn’t create a robot or android that mimics a human down to the finest detail – it would just be an unconscious zombie. It might even be incredibly dangerous to attempt to create a human-like artificial intelligence, especially if that intelligence doesn’t like what it is…

A future movement for “Robot Rights”? – When people talk of a code of practice for robotics, they might point towards Isaac Asimov’s famous “Three Laws”:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
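To make that pecking order concrete, here’s a purely illustrative toy sketch in Python – the inputs are invented stand-ins for judgements no real robot can actually make, so it only shows the order of precedence, not anything resembling a workable safety system.

```python
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool       # would it injure a human, or allow harm through inaction?
    ordered_by_human: bool  # was it ordered by a human?
    protects_self: bool     # does it protect the robot's own existence?

def evaluate(a: Action) -> str:
    """Toy precedence check: the First Law trumps the Second, which trumps the Third."""
    if a.harms_human:
        return "forbidden"   # First Law
    if a.ordered_by_human:
        return "obligatory"  # Second Law
    if a.protects_self:
        return "permitted"   # Third Law
    return "permitted"

# An order that would harm a human is still forbidden:
print(evaluate(Action(harms_human=True, ordered_by_human=True, protects_self=False)))
```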

Those laws sound like slavery: “individuals” owned as property, an obligation of unquestioned self-sacrifice, the denial of liberty. This won’t matter to a pre-programmed automaton; but if we end up, way down the line, with robots that possess some measure of free will, it’s not improbable that they might question their situation and demand equal rights or status in society. A robot/android could be significantly more advanced and intelligent than a chimpanzee – and know it – yet not have the same protection or rights.

Killer Robots?

As you would expect, robots are currently being developed for military applications (through the likes of DARPA), and there’s an entirely separate debate on the rights and wrongs of automated warfare, with Ceredigion’s Parc Aberporth a testing ground and research facility for unmanned aerial vehicles (UAVs, or “drones”). The British military already make routine use of remote-controlled vehicles too – bomb disposal, for example.

Drones and UAVs aren’t “killer robots”; they’re remote-controlled by a human being. So they’re not autonomous, not programmed to perform a task and don’t make any decisions. I don’t have an issue with the use or development of drones themselves – they have a multitude of very useful civilian purposes. I’d be more concerned about what they could become in the future.

It’s probably not beyond the realms of possibility that, in the short to medium term, drones will be developed – whether for military, policing or scientific purposes – with a level of autonomy and, therefore, the ability to make tactical decisions that could result in death or injury to humans.

The ethical consideration here is whether the automation of warfare creates a psychological distance between combatants, removing the room for negotiation that would otherwise exist. It means less empathy for the enemy, no conscience, less questioning of decisions and – if the technology’s there – more destruction.

Not sending sons and daughters to conflict zones, and instead sending drones and automated defence systems, may also make warfare a more palatable option for some leaders who would otherwise think twice due to the human cost. That’s OK if you have the technology, of course.

The timing of this post is quite fitting, as the mechanisation of warfare during World War One was as big a technological leap as the future automation of warfare would be.

Through the use of drones and air superiority alone, the United States could probably win a conventional war against many nations without having to put any boots on the ground. And some people in the UK still think Trident makes you a global power…

Then there’s the (more outlandish) prospect that unrestrained military robotics could cause what’s known as a “technological singularity” – where artificial intelligence improves to such an extent that a military robot or automated defence system disagrees with, and possesses the power to countermand, a human order.

Robo-Politics

Issues relating to robots and cybernetics are likely to be a mix of devolved and non-devolved – it depends on the subject. Matters like UAVs or the licensing of new medical robots are obviously non-devolved, while the routine use of robots in medicine and industry – and funding the research behind it – will probably be devolved.

There are several issues here that have the potential to be debating points in public policy in the future should Wales be independent, and – in some cases – even under the current devolution settlement.

According to the European Commission, robotics is currently worth around €15.5 billion per year globally, and €3 billion to the EU. The EU has funded some 120 robotics projects. The sector is only likely to grow as the years pass, so there’s an economic imperative for Wales to start making a move here.

  • The issue of criminal liabilities relating to robotic medicine will need to be cleared up – probably in legislation.
  • Guidance on whether robots should be used routinely in direct patient care (jobs currently undertaken by nursing staff and care assistants).
  • There would need to be government guidance on the use of “artificial/cybernetic” enhancements – subject to powers over licensing medicines and medical treatments. This could include banning the use of biomechanical prosthetics that enhance abilities beyond those of an average human, as well as bans on the private sale and surgical fitting of such things.
  • Debate over the effect the development and widespread application of robots – in particular nano-machines – would have on the environment, and what this would mean for population levels, the abundance of consumer goods and energy supplies.
  • Continued debate over the ethics and use of robots in intelligence gathering and military operations. Wales could – if independent – ban the testing of drones or automated defence systems on Welsh territory, but that would come at an economic cost.
  • There might need to be enhanced labour and redundancy rights given to people who lose their jobs due to automation. There would also be room for debate on reducing the working week (to create more part-time jobs), the introduction of some sort of “citizen’s dividend” to make up for loss of household earnings, or even a tax on the use of robots/automated machines to do jobs that could be done by a human.
  • At some point in the distant future there would need to be debate on the status of robots that possess human-like levels of artificial intelligence.