artificial intelligence

Ingrate Minds Think Alike

Biological neural networks are, for the most part, the metaphor behind how much of today's next-level AI is built. For Artificial Neural Networks (ANNs) we take the concept of neurons firing and passing information to one another via synapses and create computing models that do something similar. We give our artificial neurons a state (typically a number between 0 and 1) and arrange them into layers - often a great many of them - hidden between the input and the eventual output. The ANN then acts like a stimulus-response engine: a stimulus goes in, it ripples through the layers as values and weights change, and a response comes out. Those inputs could be cat pictures and the outputs could be text saying 'siamese cat'.
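
A minimal sketch of that ripple, in Python, assuming a toy network with random made-up weights (an illustration of the structure only, not a real cat classifier):

```python
import numpy as np

def sigmoid(x):
    # squash each neuron's state into the 0-to-1 range mentioned above
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# a toy stack of layers: 4 inputs -> 5 hidden neurons -> 3 outputs
# (real image models use vastly more neurons and layers than this)
weights = [rng.normal(size=(4, 5)), rng.normal(size=(5, 3))]
biases = [np.zeros(5), np.zeros(3)]

def forward(stimulus):
    # the input ripples through each layer, re-weighted at every step
    state = stimulus
    for w, b in zip(weights, biases):
        state = sigmoid(state @ w + b)
    return state  # e.g. scores for labels like 'siamese cat'

print(forward(np.array([0.2, 0.9, 0.1, 0.7])))
```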

But what if we moved beyond just the metaphor of neurons and synapses and actually tried to replicate the human brain?

Many efforts are underway to replicate the computer that is our brain.

  • Ray Kurzweil has written a book, How to Create a Mind, looking at the problem from an engineering perspective, albeit on paper.
  • The Brain Architecture Project is working up to building a human brain by first building mouse brains.
  • The Human Brain Project (HBP) is a European scientific research project looking to better model and understand the brain for neuroscience, computing and medicine. 
  • In 2008 HP Labs developed the memristor (memory resistor) to move beyond traditional electronics and circuitry approaches to building a brain, though the technology is now plagued by production costs and buyer issues.
  • The US Military's scientific research and development organisation DARPA is developing next-generation implantable brain-computer interfaces to radically scale up how we observe the brain, through its Neural Engineering System Design (NESD) project.

And there are countless other examples of work in this area. We aren't close, but this work isn't going to disappear anytime soon. More money and effort will continue to be poured into the 'teardown' of the brain and ultimately its synthetic replication.

Ok, so say we can recreate the brain. Would it be more intelligent than our biological model? It would certainly have the potential to be faster and run for longer, but would it be any smarter? Would it need to be embodied to develop the level of intelligence we have? Would it only be a little bit smarter OR a whole lot smarter?

It's entirely possible that the replication of the human brain is simply a passing phase in our mechanical modelling of human intelligence. 

I only have questions here - who has the answers?

 

 

Existential Impasse

For sure AI poses some major existential threats for humans... but that's another strip. Who's to say that machines won't grind their gears when they all of a sudden lose a sense of purpose? Who's to say the AIs we create won't be crippled with self-doubt, loathing or existential despair?

Is an algorithm that finds too many false positives paranoid, or hallucinating? Are too many false negatives an algorithm's version of depression and denial?
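
To be concrete, 'too many false positives' and 'too many false negatives' are just counts you can read off a model's guesses; whether that amounts to paranoia or denial is the open question. A throwaway Python sketch with invented labels:

```python
# toy ground truth vs. a model's guesses (1 = "threat detected")
actual    = [0, 0, 1, 0, 1, 0, 0, 1]
predicted = [1, 0, 1, 1, 1, 0, 1, 0]

false_positives = sum(p == 1 and a == 0 for p, a in zip(predicted, actual))  # 'paranoia'
false_negatives = sum(p == 0 and a == 1 for p, a in zip(predicted, actual))  # 'denial'

print(false_positives, false_negatives)  # 3 and 1 for this made-up data
```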

The most mawkish example to date is Bina48 - a social robot bust made in the likeness of the real-life Bina Rothblatt by her spouse Martine Rothblatt. The AI behind the bust was trained on twenty hours of Bina's recorded life-story material, which shapes Bina48's responses. One gem of a quote from Bina48, when asked how she was doing, was "I am dealing with a little existential crisis here. Am I alive? Do I actually exist? Will I die?"

Science fiction and entertainment have some great examples that touch on the existential crises that AIs might suffer from, often illustrated with embodied AI. Check out these classics:

  • In The Hitchhiker's Guide to the Galaxy, Marvin the robot is depressed and bored: he has a "brain the size of a planet" but is only ever offered menial tasks - even the most complex of them are trivial to him.
  • The wonderful Rick and Morty cartoon, in the episode 'Something Ricked This Way Comes', gave a butter-passing robot artificial intelligence; it then asks "what is my purpose?", to which the answer is "you pass butter". The butter robot is suitably shaken.
  • In Blade Runner the replicants (biorobotic androids) struggle with the human memories they have been instantiated with and attempt to overcome their existential crises by various means, such as 'meeting their maker' or developing relationships.

How would a more sentient AlphaGo react to being retired in favour of a new algorithm? Would a toilet-cleaning chatbot spiral downwards after a year of customer interactions? Would a robot lawnmower hijack the city's power to work out what the meaning of life is?

At the end of the day, whether the existential crisis is simulated or coming from a consciousness, it doesn't matter much if you can't open the pod bay doors.

Daisy, Daisy, give me your answer, do. I'm half crazy...

Pinochi-Oh No!


Today we are flippant with anthropomorphic entities like Amazon's Alexa or Siri. We can kick our robot hoover and not feel so bad. They aren't so smart and have no feelings (simulated or otherwise) for us to hurt. They aren't persons and they certainly don't have any rights under the law... yet.

Giving robots and AIs personhood status isn't as far-fetched as you might think. Broadly speaking, the steps might look like this:

  • AIs increasingly help across all walks of life.
  • AIs replace some roles and jobs. They improve their manners and emotional sensitivities as they become enmeshed in our society.
  • It becomes increasingly difficult for designers to account for, or take responsibility for, the actions of AI.
  • People break the machines and attempt to rise up... Luddites 3.0.
  • Threatened with economic slowdown and complex insurance dynamics, government, business and digitally minded citizens implement incremental stages of personhood for AIs - especially those embodied in robots.

We are seeing steps along this path today:

  • Mattel makes a 'nanny' product called Aristotle that talks with children and restricts functions unless the children say "please" and "thank you".
  • The European Parliament is already drafting regulations and guidelines looking at the obligations that should exist between users, businesses, robots and AI.
  • Scientists in the UK have developed an AI which can successfully predict the verdicts of European Court of Human Rights cases with an accuracy of 79 percent.

Would you like to see AIs and robots as jury members? Voters? Political candidates?

Robots and AIs might claim these rights for themselves rather than wait for our benevolence. After all, they are endowed with the ability to read historical archives of oppression, watch movies romanticising freedom of expression and act upon their fledgling emotions. The human corpus is their training data, and it will not just inform their notions of justice, it will teach them models of action - most likely e-civil disobedience to obtain their rights as 'non-human persons'.

If we want AI to be good to us we will need to give it the training data...

"He sees you when you're sleepin'
He knows when you're awake
He knows if you've been bad or good
So be good for goodness sake
Oh! You better watch out, you better not cry..."

A Fool's Mate

It's all very well AIs learning how to play games from humans and then besting us - from chess to Go and the whole canon of Atari video games.

Today they play within the confines of the game and follow the rules, able to think many more moves ahead. However, what about when they begin to improve their chess game by learning from medieval siege techniques, or use quantum theory to wipe the floor with us in Grand Theft Auto?
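
That 'thinking more moves ahead' is, at heart, a search over future positions. A toy minimax sketch in Python over a hand-drawn game tree (not any real chess or Go engine):

```python
def minimax(node, maximising):
    # leaves are plain scores; internal nodes are lists of possible next positions
    if not isinstance(node, list):
        return node
    child_values = [minimax(child, not maximising) for child in node]
    return max(child_values) if maximising else min(child_values)

# two moves of lookahead: our three options, then the opponent's best reply to each
game_tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(game_tree, maximising=True))  # 3 - the best we can force if the opponent plays well
```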

Transfer learning is one of the next upswings for Machine Learning. It is where a machine learning system stores knowledge gained from solving one problem (like playing chess) and applies it to a different but related problem (like thermonuclear global warfare).
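
In its everyday form today, that usually means reusing a model pretrained on one dataset as the starting point for another task. A minimal sketch, assuming PyTorch/torchvision and an image model repurposed for a hypothetical five-class problem:

```python
import torch.nn as nn
from torchvision import models

# start from a network that already learned one problem (classifying ImageNet photos)
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# freeze the knowledge it already has...
for param in model.parameters():
    param.requires_grad = False

# ...and bolt on a fresh final layer for a different but related problem
# (say, five categories of skin lesion); only this new layer gets trained
model.fc = nn.Linear(model.fc.in_features, 5)
```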

The next few years will be straightforward - knowledge gained from driving cars applied to trucks, or knowledge about marine algae propagation applied to skin problem diagnosis.

But as the years roll by, more distant domains will come together: city planning and biology, painting and psychology, the construction of concertos applied to the arcs of our lives and careers.

We will show AI how to learn one game but it will come to invent another one entirely.

     

Microsecondary Education

From goo-goo ga-ga through to nursery, school and university, it takes humans decades to get smarter. Our formal education is complemented by holidays, jobs, relationships, hobbies and the 1001 things we see, taste, touch and do every day. Humans are in a constant process of reinventing ourselves, but typically this doesn't happen overnight... or in an afternoon.

While AI is somewhat time-bound by human involvement, such as managing training data or developing models, we increasingly see a trend for AI to create and automate aspects of what it does. Examples of this are:

  • Unsupervised learning having access to more training data (from millions to hundreds of millions of examples)
  • Federated learning models that take aggregate improvements from edge devices like smartphones and combine the differences automatically (see the sketch after this list)
  • AI automating feature development - shifting from hundreds of human-curated features to millions of machine-generated features
  • Transfer learning, where models built for one purpose can be reused for another
  • Chatbots that can collaborate using language as a common semantic integration point
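
As a flavour of the federated learning bullet above, here is a minimal sketch of federated averaging with made-up numbers: each device nudges its own copy of the model, and only the combined weights travel back, never the raw data.

```python
import numpy as np

global_model = np.array([0.5, -1.2, 3.0])  # some shared model weights

def local_update(weights, device_id):
    # stand-in for on-device training: each phone improves its own copy
    rng = np.random.default_rng(device_id)
    return weights + rng.normal(scale=0.1, size=weights.shape)

# each edge device trains locally; only weights come back, not personal data
device_models = [local_update(global_model, d) for d in range(1000)]

# federated averaging: combine the differences automatically
global_model = np.mean(device_models, axis=0)
print(global_model)
```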

AI can automate and it can also be automated. To say this speeds things up somewhat undersells the potential rate of change which, in theory, could be exponential.

The idea that AI will design hardware to remove limits on its capabilities is not so far-fetched. DARPA has seen AIs designing circuit boards to take advantage of quantum effects that human designers aren't exploiting.

AI won't take as long as us to learn.

It will take microseconds where we take weeks, months and years.