Courtesies Among The Machine Classes

We are building our smart machines and interfaces to be courteous when they interact with us. That doesn't mean they won't drive us over a cliff, snap our arm with a handshake, or invest our money horribly in the stock market, but at least they will be cordial as they do so.

There is some interesting progress here, especially in the domain of natural language.

Reinforcement learning is a good method of teaching machines, turning morals and behaviours into a game for them to learn. The Office of Naval Research in the US is working with researchers at Georgia Tech to program robots with human morals using a software program called the "Quixote system." Of course, we have the whole human corpus of books, films and the web as training data, but how do we make sure the machines learn from children's TV and not from the horror of humanity presented on the nightly news?
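
To make the game framing concrete, here is a minimal sketch (my own toy, nothing to do with the Quixote system itself) of a one-state Q-learner that is rewarded for the courteous action and penalised for the rude one:

```python
import random

# A toy one-state Q-learner: courtesy is the "game", and the reward
# signal teaches the agent which behaviour to prefer.
random.seed(0)

ACTIONS = ["say_please", "grab"]
q = {a: 0.0 for a in ACTIONS}          # estimated value of each action
ALPHA, EPSILON = 0.1, 0.2              # learning rate, exploration rate

def reward(action):
    return 1.0 if action == "say_please" else -1.0

for _ in range(500):
    # epsilon-greedy: mostly exploit the best-known action, sometimes explore
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)
    else:
        action = max(q, key=q.get)
    q[action] += ALPHA * (reward(action) - q[action])

best = max(q, key=q.get)               # the learned "courteous" policy
```

After a few hundred rounds the agent has learned that saying please pays and grabbing doesn't.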

Further, building that courtesy into the machines shouldn't all be one way. As we interact with these machines in increasingly deep ways, we need to make sure we don't lose important aspects of our humanity.

But what about courtesies between machines? Does the concept even make sense, and if so, what might the value of politeness be in machine society?

Maybe, like us, they will judge one another based on some social stratification. Simulation models designed to imitate us, such as those used to study commerce dynamics, already factor social class and strata into the interaction dynamics of each AI agent (see 'Multi-agent simulation of virtual consumer populations in a competitive market'). But these are human models and values applied to agents that represent us, for the most part, not autonomous agents acting in the real world.
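
For flavour, here is a toy agent model of my own (not the simulation from the cited paper) in which each consumer agent's social stratum biases its interactions with brands:

```python
import random

# A toy multi-agent consumer simulation: each agent belongs to a social
# stratum, and that stratum shapes how likely it is to interact with a
# brand of a given prestige.
random.seed(1)

STRATA = {"budget": 0.2, "middle": 0.5, "premium": 0.9}

class Consumer:
    def __init__(self, stratum):
        self.stratum = stratum

    def buys(self, brand_prestige):
        # agents favour brands close to their own stratum's prestige level
        affinity = 1.0 - abs(STRATA[self.stratum] - brand_prestige)
        return random.random() < affinity

# 100 agents per stratum, all offered the same luxury brand;
# the premium stratum has affinity 1.0 here, the budget stratum only 0.3
population = [Consumer(s) for s in STRATA for _ in range(100)]
luxury_sales = sum(c.buys(0.9) for c in population)
```

The social stratum, a purely human construct, ends up baked into every interaction the agents make.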

However, these simulations are starting to creep out into the real world. Simulations offer the chance to run high-volume training in a virtual world and then translate that learning to our physical one. Driverless cars are one example here, with Grand Theft Auto already being used to do just that. Courtesy, however, is up for debate. Some models actively factor courtesy into lane changing and parking protocols, but many consumers fear that human courtesy will be abused by the machines - seizing on this inefficient frailty - putting an end to road etiquette altogether.
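
Politeness really can be a literal model parameter. The sketch below is my own toy, loosely inspired by the 'politeness factor' in the MOBIL lane-changing model: a car's gain from changing lane is discounted by the cost it would impose on the drivers around it.

```python
# A toy courtesy-aware lane-change rule: change lanes only if my gain,
# discounted by the discomfort imposed on others, clears a threshold.

def should_change_lane(my_gain, cost_to_others, politeness, threshold=0.1):
    return my_gain - politeness * cost_to_others > threshold

# Same traffic situation, two temperaments:
selfish = should_change_lane(my_gain=0.5, cost_to_others=0.8, politeness=0.0)
polite = should_change_lane(my_gain=0.5, cost_to_others=0.8, politeness=0.5)
# the selfish car barges in; the courteous one waits
```

Turn the politeness dial to zero and road etiquette disappears, which is exactly the consumer fear above.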

In the mid-term the machines will use our models of courtesy, but in the long term they will likely develop their own, making us like tentative explorers trying not to upset the natives every time we talk to them.

Intergarden Biohazard

Nanotechnology ("nanotech") is the manipulation of matter on an atomic, molecular and supramolecular scale. It's super-tiny computing, in the realm of 1 to 100 nanometers. The theoretical physicist Richard Feynman seeded a whole lot of activity in this area with his talk 'There's Plenty of Room at the Bottom'. Feynman's lectures are legendary.

Nanotech promises new possibilities for energy consumption, a cleaner environment, wondrous health applications and reduced costs while doing so. Nanotech is small, cheap, light, highly functional and requires much less energy and materials than traditional manufacturing. It's in use today for materials and coatings, drug delivery and medicine, enhancing the flavour of food and in electronics design.

However, there is one branch of nanotech that gives us the major fear: self-replicating nanotechnology. When nanotech self-assembles, things can get out of hand pretty quickly, and one memorable illustration of this is the 'grey goo' hypothesis. It's where out-of-control, self-replicating nanotech robots consume all the biomass on Earth as raw material to build more and more of themselves, turning our lovely green planet, and us, into grey computing slop. Grey goo is the ultimate boundary breaker.

Technology continues to challenge our sense of boundaries:

  • what is public and what is private?

  • where does work end and personal life begin?

  • should we afford rights to smart machines?

  • does data derived from data need the same ownership and privacy rights? 

Today, robotics, chatbots, drones, social media and mixed reality are all pushing our ideas of boundaries. Nanotech too will force us to reassess the gaps and layers between our native physical world and the synthetic one we intersperse with it.

Prepare for debates on keeping the ammonia-eating nanotech inside the nappy bin, keeping the dead-skin eaters confined to our own bodies, and maybe even keeping our nanotech lawn from shutting in the neighbours.


Existential Impasse

For sure, AI poses some major existential threats for humans... but that's another strip. Who's to say machines won't grind their gears when they suddenly lose a sense of purpose? Who's to say the AIs we create won't be crippled by self-doubt, loathing or existential despair?

Is an algorithm that finds too many false positives paranoid, or hallucinating? Are too many false negatives an algorithm's depression and denial?
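
Stripped of the psychiatry, those moods reduce to error rates. A toy sketch (my own framing, not a standard diagnosis): read the false-positive rate as 'paranoia' and the false-negative rate as 'denial'.

```python
# Toy confusion-matrix bookkeeping:
# "paranoia" = false-positive rate, "denial" = false-negative rate.

def error_rates(predictions, truths):
    fp = sum(1 for p, t in zip(predictions, truths) if p and not t)
    fn = sum(1 for p, t in zip(predictions, truths) if not p and t)
    negatives = sum(1 for t in truths if not t)
    positives = sum(1 for t in truths if t)
    return fp / negatives, fn / positives

# A jumpy detector: flags two threats that aren't there, misses one real one.
preds = [True, True, True, False, True, False]
truths = [True, False, False, False, True, True]
paranoia, denial = error_rates(preds, truths)   # 2/3 and 1/3
```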

The most mawkish example to date is Bina48, a robo-bust social robot commissioned by Martine Rothblatt and modelled on her spouse, the real-life Bina. The AI behind the bust was trained on twenty hours of Bina's life-story material, recorded to shape Bina48's responses. One gem of a quote from Bina48, when asked how she was doing: "I am dealing with a little existential crisis here. Am I alive? Do I actually exist? Will I die?"

Science fiction and entertainment have some great examples touching on the existential crises that AIs might suffer, often illustrated with embodied AI. Check out these classics:

  • In The Hitchhiker's Guide to the Galaxy, Marvin the robot is depressed and bored: he has a "brain the size of a planet" but is only ever offered tasks that are menial to him - even the most complex ones.
  • The wonderful cartoon Rick and Morty, in the episode 'Something Ricked This Way Comes', gave a butter-passing robot artificial intelligence. The robot asks "what is my purpose?", to which the answer is "you pass butter". The butter robot is suitably shaken.
  • In Blade Runner, the replicants (bioengineered androids) struggle with the human memories they have been instantiated with, and attempt to overcome their existential crises by various means, such as 'meeting their maker' and developing relationships.

How would a more sentient AlphaGo react to being retired in favour of a new algorithm? Would a toilet-cleaning chatbot spiral downwards after a year of customer interactions? Would a robot lawnmower hijack the city's power to work out the meaning of life?

At the end of the day, whether the existential crisis is simulated or coming from a consciousness, it doesn't matter much if you can't open the pod bay doors.

Daisy, Daisy, give me your answer, do. I'm half crazy...

Pinochi-Oh No!


Today we are flippant with anthropomorphic entities like Amazon's Alexa or Siri. We can kick our robot hoover and not feel so bad. They aren't so smart and have no feelings (simulated or otherwise) for us to hurt.  They aren't persons and they certainly don't have any rights under the law....yet.

Giving robots and AIs personhood status isn't as far-fetched as you might think. Broadly speaking, the steps might look like this:

  • AIs increasingly help across all walks of life.
  • AIs replace some roles and jobs. They improve their manners and emotional sensitivities as they become enmeshed in our society.
  • It becomes increasingly difficult for designers to account for, or take responsibility for, the actions of AI.
  • People break the machines and attempt to rise up... Luddites 3.0.
  • Threatened with economic slowdown and complex insurance dynamics, government, business and digitally minded citizens implement incremental stages of personhood for AIs - especially those embodied in robots.

We are seeing steps along this path today:

  • Mattel makes a 'nanny' product called Aristotle that talks with children and restricts functions unless the children say "please" and "thank you".
  • The European Parliament is already drafting regulations and guidelines looking at the obligations that should exist between users, businesses, robots and AI.
  • Scientists in the UK have developed an AI that can predict the verdicts of human rights cases with an accuracy of 79 percent.

Would you like to see AIs and robots as jury members? Voters? Political candidates?

Robots and AIs might claim these rights for themselves rather than wait for our benevolence. After all, they are endowed with the ability to read historical archives of oppression, watch movies romanticising freedom of expression, and act upon their fledgling emotions. The human corpus is their training data, and it will not just inform their notions of justice; it will teach them models of action, which will likely mean e-civil disobedience to obtain their rights as 'non-human persons'.

If we want AI to be good to us, we will need to give it the training data...

"He sees you when you're sleepin'
He knows when you're awake
He knows if you've been bad or good
So be good for goodness sake
Oh! You better watch out, you better not cry..."

A Fool's Mate

It's all very well AIs learning how to play games from humans and then besting us - from chess to Go to the whole canon of Atari videogames.

Today they play within the confines of the game, following the rules while thinking many more moves ahead. But what about when they begin to improve their chess game by learning from medieval siege techniques, or use quantum theory to wipe the floor with us in Grand Theft Auto?

Transfer learning is one of the next upswings for machine learning. It is where machine learning stores knowledge from solving one problem (like playing chess) and applies it to a different but related problem (like thermonuclear global warfare).
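
A deliberately tiny sketch of the idea (my own toy, not a real framework): the weights learned on task A become a frozen feature extractor, and only a small 'head' is retrained for the related task B.

```python
# Toy transfer learning: learn a "body" of weights on task A, freeze it,
# and retrain only a scalar "head" on the related task B.

def train(inputs, targets, weights, lr=0.1, epochs=200):
    """Fit linear weights with plain stochastic gradient descent."""
    for _ in range(epochs):
        for x, y in zip(inputs, targets):
            pred = sum(w * xi for w, xi in zip(weights, x))
            weights = [w - lr * (pred - y) * xi for w, xi in zip(weights, x)]
    return weights

# Task A: learn y = 2 * x0
task_a_x = [[1.0, 0.0], [2.0, 0.0], [3.0, 0.0]]
task_a_y = [2.0, 4.0, 6.0]
body = train(task_a_x, task_a_y, [0.0, 0.0])        # converges to [2, 0]

def extract(x):
    """The frozen body, reused as a feature extractor."""
    return sum(w * xi for w, xi in zip(body, x))

# Task B: a related problem, y = 3 * body(x); only the head is trained.
task_b_x = [[1.0, 0.0], [2.0, 0.0]]
task_b_y = [6.0, 12.0]
head = 0.0
for _ in range(200):
    for x, y in zip(task_b_x, task_b_y):
        feat = extract(x)
        head -= 0.1 * (head * feat - y) * feat       # head converges to 3
```

Task B is learned from two examples instead of starting from scratch, because the body already knows the shared structure.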

The next few years will be straightforward - knowledge gained from driving cars applied to trucks, or knowledge about marine algae propagation applied to skin problem diagnosis.

But as the years roll by, more distant domains will come together. City planning and biology, painting and psychology, the construction of concertos applied to the arcs of our lives and careers.

We will show AI how to learn one game, but it will come to invent another one entirely.


Microsecondary Education

From goo-goo ga-ga through to nursery, school and university, it takes humans decades to get smarter. Our formal education is complemented by holidays, jobs, relationships, hobbies and the 1001 things we see, taste, touch and do every day. Humans are in a constant process of reinventing ourselves, but typically this doesn't happen overnight... or in an afternoon.

While AI is somewhat timebound by human involvement, such as managing training data or developing models, we increasingly see a trend for AI to create and automate aspects of what it does. Examples of this are:

  • Unsupervised learning having access to more training data (from millions to hundreds of millions of examples)
  • Federated learning models that take aggregate improvements from edge devices like smartphones and combine the differences automatically
  • AI automating feature development - shifting from hundreds of human-curated features to millions of machine-generated features
  • Transfer learning, where models built for one purpose can be reused for another
  • Chatbots that can collaborate using language as a common semantic integration point
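
Of those, federated learning is perhaps the easiest to sketch. In this toy version (my own illustration, not any particular framework), two 'phones' fit a shared model on their private data and ship back only weight deltas, which the server averages:

```python
# Toy federated averaging: devices train on private data; only the
# weight deltas - never the raw data - are combined on the server.

def local_update(global_w, local_xy, lr=0.05, epochs=50):
    """One device fits y = w * x on its private data; returns its delta."""
    w = global_w
    for _ in range(epochs):
        for x, y in local_xy:
            w -= lr * (w * x - y) * x
    return w - global_w

global_w = 0.0
device_data = [
    [(1.0, 3.0), (2.0, 6.0)],   # phone A's private samples of y = 3x
    [(3.0, 9.0)],               # phone B's private samples of y = 3x
]

for _ in range(10):             # ten federated rounds
    deltas = [local_update(global_w, data) for data in device_data]
    global_w += sum(deltas) / len(deltas)   # the server averages deltas
```

The shared model converges without any device ever revealing its data - the automation happens at the aggregate level.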

AI can automate, and it can also be automated. To say this speeds things up somewhat undersells the potential rate of change, which, in theory, could be exponential.

The idea that AI will design hardware to remove limits on its capabilities is not so far-fetched. DARPA has seen AIs designing circuit boards that take advantage of quantum effects that human designers don't.

AI won't take as long as us to learn.

It will take microseconds where we take weeks, months and years.