Posted: April 7, 2019

At the recent DARPA conference in Alexandria, Virginia, scientists and researchers met to take the pulse of what artificial intelligence (AI) has achieved and the challenges still facing the field. It's an exciting time, but the subject remains an enigma. Are we on the right track? Are our assumptions correct? Are the goals realistic?

For the average person, AI is hyped as the next transformation of human knowledge into a digital paradigm. The idea is to harness the power of computing through software to build automated, self-adapting information systems that help tackle problems in diverse fields: cosmology, quantum physics, genetics, protein folding, robotics, the environment, social behavior; the list of areas where AI is useful is expansive.

Artificial Intelligence (the AI part of the equation) and Intelligent Automation (the IA part) are inextricably linked. AI serves IA and in return improves itself by studying outcomes.

Philosophers have tried to formalize human thinking in terms of symbolic representation – a formal calculus of how we see, describe and understand the physical world we experience. The fact that human consciousness has many descriptions is a result of how scientific methods and tools have evolved over the centuries. Many assumptions have been found to be primitive or inaccurate as we delved deeper into how the human brain operates. It's a moving target, and herein lies the problem: are intelligence and how we think still subjects that require further formal definition?

MIT scientist Marvin Minsky[1] was an early contributor (1956) to defining the artificial part of intelligence: a process requiring little or no human supervision. He thought that within a decade the problem of creating 'artificial intelligence' would be substantially solved, a singularity in which artificial intelligence rivaled human intelligence. Sixty-plus years later, that solution still evades us. The fact that we tend to conflate AI with intelligent robotics has only made the subject more opaque.

“A year spent in artificial intelligence is enough to make one believe in God.” – Alan Perlis

Today, those who work in areas such as pattern recognition, language translation, imaging and semantics have created multiple tools to solve their particular problem sets. Many AI tools come from existing methods: support vector machines (SVM), Markov chains (probabilistic), neural networks (machine learning), symbolic logic and search (human readable), genetic algorithms (fitness), stochastic models (gradients), and many more. A data scientist understands which tools are appropriate for the subject matter. Thinking of AI as a single thing is a mistake.
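As a rough illustration of that point, here is a minimal sketch, not taken from the conference or this article, that tries two of the method families named above, an SVM and a small neural network, on the same toy classification problem and compares them by cross-validation. It assumes scikit-learn is available; the dataset and parameters are purely illustrative.

```python
# Illustrative sketch (assumes scikit-learn): two of the method families
# named above, a support vector machine and a small neural network, tried
# on the same toy classification problem and compared by cross-validation.
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)   # stand-in for a real problem set

candidates = {
    "SVM (RBF kernel)": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
    "small neural network": make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)),
}

# Cross-validated scores give a like-for-like comparison, which is how a
# data scientist would judge which tool suits the subject matter.
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```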

How We Learn
The brain creates and recalls memories, decides what it keeps in short- and long-term memory, and uses them to invent new things. Making machines operate in the same way seems to be a fundamental assumption behind how we try to build them.

From a digital perspective, we apply this model to a computer-centric equivalent. Mapping the brain model to a computerized digital model was seen as a major milestone. In 2007 it was thought that researchers had met with success: scientists in Switzerland, working with IBM, showed that a computer simulation of the neocortical column, the most complex part of a mammal's brain, behaved like its biological counterpart[2].

Researchers said these results suggested that an entire mammal brain could be completely modeled within three years, and a human brain within the next decade. More than ten years later we are still not there. Still waiting. Perhaps this was the wrong approach.

As humans we see ourselves at the top of the intellectual food chain, with single-cell organisms at the base. Since we are a composite of cells, each with a defined purpose, with the brain acting as a master operator or conductor, this hubris has led us to view AI as a machine equivalent of how we think. Perhaps this is a fundamental misdirection.

Consciousness – the awareness of self – exists in many forms. According to Nobel laureate James P. Allison[3], the immune system of mammals assigns capabilities to warrior cells, like B and T cells, to attack foreign bodies and protect the organism. They don't attack the body unless tricked by antigens created by cell reproduction gone amok. Cancer is a classic example.

Edward O. Wilson[4] examined the social behavior of ant colonies and concluded that through group-level selection — favoring the survival of one group of organisms over another — evolution brought into being many essential genes that benefit the group at the individual's expense. In humans, these may include genes that underlie generosity, moral constraints, even religious behavior.

Richard Dawkins proffered that digital information in a gene is immortal and is the primary unit of selection. No other unit shows such persistence — not chromosomes, not individuals, not groups and not species.

Consciousness
So, what is consciousness to begin with, and how can it be emulated in a digital way? David J. Chalmers[6] in 1995 dissected this term into what he calls ‘easy’ and ‘hard’ problems. “The easy problems of consciousness are those that are susceptible to the methods of cognitive science, where the phenomenon is explained in terms of computational or neural mechanisms. The hard problems are those that seem to resist those methods.”

Fair enough. Easy, in his approach, requires:
• the ability to discriminate, categorize, and react to environmental stimuli;
• the integration of information by a cognitive system;
• the reportability of mental states;
• the ability of a system to access its own internal states;
• the focus of attention;
• the deliberate control of behavior;
• the difference between wakefulness and sleep.

Easy problems are 'easy' because they focus on cognitive abilities and functions. To explain a cognitive function, we need only specify a mechanism that can perform the function. Applied to a machine-enabled AI model, the calculus for creating rules is straightforward: find the appropriate method, apply training data sets, adjust the thresholds and compare the results with those of a different model.
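A minimal sketch of that recipe, again assuming scikit-learn; the dataset, the two models and the threshold grid are illustrative choices, not anything prescribed by Chalmers or this article.

```python
# Illustrative sketch (assumes scikit-learn) of the recipe above: pick a
# method, fit it to training data, adjust a decision threshold, and
# compare the result against a second model.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)          # stand-in training data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "logistic regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

best = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    probs = model.predict_proba(X_te)[:, 1]
    # "Adjust the thresholds": sweep the probability cut-off rather than
    # accepting the default 0.5, keeping whichever value scores best.
    scored = [(f1_score(y_te, (probs >= t).astype(int)), t) for t in (0.3, 0.4, 0.5, 0.6, 0.7)]
    best[name] = max(scored)
    print(f"{name}: best F1 = {best[name][0]:.3f} at threshold {best[name][1]}")

# "Compare to a different model's results": keep the better of the two.
print("preferred model:", max(best, key=lambda k: best[k][0]))
```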

Hard problems pose a different challenge because they are not problems about the performance of functions. The problem persists even when the performance of all the relevant functions is understood. We know how a process works, but we don't know why an outcome consistently occurs.

“To explain learning, we need to explain the way in which a system’s behavioral capacities are modified in light of environmental information, and the way in which new information can be brought to bear in adapting a system’s actions to its environment. If we show how a neural or computational mechanism does the job, we have explained learning.” – Chalmers

For a machine equivalent of consciousness, the hard learning issue is the key.

Machine Consciousness
As mentioned earlier, the awareness of “self” describes how something knows it exists, can process inputs and convert them into meaningful information, and can activate a process capable of creating an outcome. Under this definition, a digital process such as IA harnesses a set of tools such as AI to perform tasks that replace or augment a human process. One can think of IA as a physical manifestation of the solutions that AI provides to address a goal. The symbolic calculus is typically a set of software-hardware methods compatible with producing an outcome.

Turing Test
Developed in 1950 by Alan Turing, it tests a machine’s ability to exhibit intelligent behavior indistinguishable from that of a human.
The test has become an important concept in the philosophy of AI[7], which attempts to answer questions such as:
● Can a machine act intelligently?
● Can it solve any problem that a person would solve by thinking?
● Are human and machine intelligence the same? Is a human brain essentially a computer?
● Can machines have mental states, and be conscious in the same way that human beings are?
● Can a machine feel how things are?

While the Turing test is a good operational definition of intelligence, it does not prove that a machine has a mind, consciousness, or self-awareness.
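To make the protocol concrete, here is a toy sketch of the imitation game's structure only. The canned machine respondent and the console prompts are hypothetical stand-ins; passing such a toy obviously says nothing about mind, consciousness, or self-awareness.

```python
# Toy sketch of the imitation game's structure only; the canned responses
# and console prompts are stand-ins, not a meaningful test of intelligence.
import random

def machine_respondent(question: str) -> str:
    canned = {
        "What is 2 + 2?": "4",
        "How do you feel today?": "I feel fine, thank you.",
    }
    return canned.get(question, "I'm not sure.")

def human_respondent(question: str) -> str:
    return input(f"(human, please answer) {question} ")

def imitation_game(questions):
    # Randomly hide the machine and the human behind the labels A and B.
    funcs = [machine_respondent, human_respondent]
    random.shuffle(funcs)
    respondents = dict(zip("AB", funcs))
    for q in questions:
        print(f"\nJudge asks: {q}")
        for label in "AB":
            print(f"  {label}: {respondents[label](q)}")
    guess = input("\nWhich respondent is the machine, A or B? ").strip().upper()
    truth = next(label for label, f in respondents.items() if f is machine_respondent)
    print("Correct." if guess == truth else "Fooled: the machine passed this round.")

if __name__ == "__main__":
    imitation_game(["What is 2 + 2?", "How do you feel today?"])
```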

G-MAFIA and BAT
Apart from a legion of universities spearheading government research grants in AI and robotics, there are nine major global firms working on business applications that benefit from AI. The first group (G-MAFIA) consists of US companies: Google, Microsoft, Amazon, Facebook, IBM and Apple. The second group (BAT) comes from China: Baidu, Alibaba and Tencent. Discussing their initiatives goes beyond our scope, but it raises the question of how “intelligence” is manipulated and how data is acquired to support AI.

At the core of these questions is the ability to gather sufficient data to build an information model in which AI tools produce an IA function. Simply put, where does the data come from and how is it used?

As author Shoshana Zuboff states in The Age of Surveillance Capitalism[8], “It’s about the darkening of the digital dream and its rapid mutation into a voracious and utterly novel commercial project that I call surveillance capitalism.” She and others question how the nine global giants are spending massive amounts to capture data, not only on their subscribers but on the exploding world of machines in IoT and IoE, the latter term adopted by Cisco to support what it calls FOG[9].

The daily volume of data flowing over the Internet backbone is staggering[10]. Estimates show that 90% of the data on the internet has been created since 2016.

IDC predicts that by 2025 the world’s volume of data will expand to 163 zettabytes, roughly a tenfold increase. Most of it will come from IoT/IoE/FOG.

Worldwide, data transfer over the Web is close to 2.5 quintillion bytes a day. The US alone produces 2,657,700 gigabytes every minute. How much is transient and how much has a half-life beyond hours is unknown. The adoption of blockchains, an immutable and permanent record of transactions (events), compounds the data footprint. What role AI plays here is not yet established.
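The “immutable and permanent” property of a blockchain comes from chaining each record to a hash of the one before it, which is also why the footprint only ever grows. A minimal, generic sketch follows; it is not any particular blockchain implementation, just the underlying append-only idea.

```python
# Minimal, generic sketch of a hash-chained, append-only log; each entry
# commits to the hash of the one before it, so the record can only grow
# and any edit to history is detectable.
import hashlib
import json
import time

def add_block(chain, payload):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"payload": payload, "prev_hash": prev_hash, "timestamp": time.time()}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)

def verify(chain):
    for i, block in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        recomputed = hashlib.sha256(json.dumps(
            {k: block[k] for k in ("payload", "prev_hash", "timestamp")},
            sort_keys=True).encode()).hexdigest()
        if block["prev_hash"] != expected_prev or block["hash"] != recomputed:
            return False
    return True

chain = []
for event in ("sensor reading", "camera frame id", "web click"):
    add_block(chain, event)
print("chain valid:", verify(chain))                   # True
chain[0]["payload"] = "tampered"
print("valid after editing history:", verify(chain))   # False
```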

Surveillance and Privacy
As we move to a world of surveillance data, the picture becomes more complicated. These are streaming data flows. The business case for “surveillance” is focused on capturing real-time data feeds: cameras, sensors, web traffic, social activity, media consumption, user preferences, products and services used. This kaleidoscope of data showers creates a digital profile of activities, human or machine.
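For illustration only, here is a tiny sketch of how such heterogeneous feeds might be folded into per-subject activity profiles; the sources and field names are hypothetical, not drawn from any of the firms discussed here.

```python
# Illustration only: folding the kinds of feeds listed above (cameras,
# sensors, web traffic, social activity) into per-subject activity
# profiles. The sources and field names are hypothetical.
from collections import defaultdict

events = [
    {"subject": "user-42", "source": "web", "item": "news-site"},
    {"subject": "user-42", "source": "social", "item": "liked-post"},
    {"subject": "user-42", "source": "camera", "item": "storefront-cam-7"},
    {"subject": "thermostat-9", "source": "sensor", "item": "temperature-reading"},
]

profiles = defaultdict(lambda: defaultdict(int))
for e in events:
    # Every event, human or machine, adds one more facet to a profile.
    profiles[e["subject"]][e["source"]] += 1

for subject, counts in profiles.items():
    print(subject, dict(counts))
```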

G-MAFIA and BAT monetize information on behalf of their customers, primarily to support advertising. FOG data – knowing what devices you use, what data they create and what information value can be extracted – is a ripe target for AI + IA. This is the focus of privacy: how do we as humans participate in the value of the data we produce? It's a missing piece in how we move forward toward an information economy.

AI is poised to both streamline and improve information management for the benefit of humanity. The tradeoff is who owns the data and how it is used and shared. Information can be considered an energy input that drives a process, with AI as a more efficient engine.

Future – in 5 Years
The nine firms in G-MAFIA and BAT want to be both the fuel and the engines that spawn data capture and its conversion into information, often at the expense of privacy. Depending on which country one resides in, governmental oversight (legislation and regulation) will play an important role. Given that governments tend to lag in embracing fast-moving technologies, the acceptance of AI, whether in terms of job creation or suppression or economic importance, will not be friction-free. However, it is clear that information has value, and capturing that value requires including the “owners” of data as part of the value exchange.

Andre Szykier
Founder, UbiVault
CTO, BlockchainBTM
Chief Scientist, Aegis Health Security
andre@ubivault.com

“While there are huge benefits that AI offers, there are potential ethical issues it creates.” – Mark Walport

[1] Marvin Minsky, MIT
[2] A working brain model
[3] James P. Allison
[4] Edward O. Wilson
[5] Nature
[6] David Chalmers
[7] Philosophy of artificial intelligence
[8] The Age of Surveillance Capitalism, Shoshana Zuboff
[9] FOG: a network architecture using edge devices for computation and storage, routed over the internet.
[10] Internet statistics – Microfocus