NstaCPUbiz Pages

Thursday, October 18, 2018

Why Deep Thinking and Learning Matters and What’s Next for Artificial Intelligence/Expert Systems


How Deep Thinking and Learning is Impacting Everything

There is a great difference between knowledge and intelligence. --  Fortune Cookie

It’s almost impossible to escape the impact frontier technologies are having on everyday life.

At the core of this impact are the advancements of artificial intelligence, expert systems, machine learning, deep thinking and deep learning.

These change agents are ushering in a revolution that will fundamentally alter the way we live, work, play, and communicate, much as the industrial revolution did. More specifically, AI is the new industrial revolution of the information age.

The most exciting and promising of these frontier technologies are the advancements happening in the deep thinking and learning space.

While still nascent, it’s deep thinking and learning that is percolating into your smartphones and tablets, driving advancements in healthcare, creating efficiencies in the power grid, improving agricultural yields, helping us climb the mountain of information we’ve created, and helping us find solutions to climate change and other hitherto seemingly insurmountable issues of our times.

Just this year, a handful of high-profile experiments came into the spotlight, including Microsoft’s Tay, Google DeepMind’s AlphaGo, and Facebook’s M. Together they highlight the versatility of deep thinking and learning and the application of AI and expert systems.

For instance, Google DeepMind has been used to master the game of Go, to cut Google’s data center energy bills by reducing power consumption by 15%, and even to work with the NHS to fight blindness.

“Deep thinking and learning is an amazing set of tools that is helping numerous groups create exciting AI and expert systems applications,” says Andrew Ng, chief scientist at Baidu and chairman and co-founder of Coursera. “It is helping us build self-driving cars, accurate speech recognition, computers that can understand images, face recognition, and much more.”

These experiments all rely on a technique known as deep thinking and learning, which attempts to mimic the layers of neurons in the brain’s neocortex. This idea – to create an artificial neural network by simulating how the brain works – has been around since the 1950s in one form or another.

Deep thinking and learning is a subset of a subset. At the top sits artificial intelligence and expert systems, which encompass most logic- and rule-based systems designed to solve complex problems. Within AI and expert systems you have machine thinking and learning, which uses a suite of algorithms to work through data and improve decision-making and diagnostic processes. And within machine thinking and learning you come to deep thinking and learning, which can make sense of data using multiple layers of abstraction, much like the human mind it simulates.
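To make the “multiple layers of abstraction” idea concrete, here is a minimal sketch in pure Python of a two-layer feed-forward network. The weights are hand-picked for illustration, not trained values, and real systems use frameworks rather than hand-rolled loops; this only shows how each layer transforms the previous layer’s output into a higher-level representation.

```python
def relu(v):
    # Rectified linear unit: the standard nonlinearity between layers
    return [max(0.0, x) for x in v]

def dense(inputs, weights, biases):
    # One fully connected layer: each output is a weighted sum of all inputs
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

# Hypothetical hand-picked weights for a tiny 3-input network.
# Layer 1 turns raw features into intermediate features; layer 2 turns
# those into a single score. Each layer is one "level of abstraction."
w1 = [[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]]
b1 = [0.0, 0.1]
w2 = [[1.0, -1.0]]
b2 = [0.2]

raw = [0.2, 0.7, 0.1]               # raw input data
hidden = relu(dense(raw, w1, b1))   # first level of abstraction
score = dense(hidden, w2, b2)       # final output
print(score)
```

Stacking more such layers, with trained rather than hand-picked weights, is all that separates this sketch from a deep network in spirit.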

The relationship between artificial intelligence and expert systems, machine thinking and learning, and deep thinking and learning.

During the training process, a deep neural network learns to discover useful patterns in digital representations of data, like sounds and images. This is why we’re seeing more advancements in image recognition, machine translation, and natural language processing come from deep thinking and learning.
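As a toy illustration of what “learns to discover useful patterns” means, the loop below fits a single artificial neuron to a made-up threshold pattern by gradient descent. The data, learning rate, and epoch count are arbitrary choices for the sketch; real training works the same way, just with millions of parameters instead of two.

```python
import random, math

# Illustrative supervised setup: the hidden pattern is y = 1 when x > 0.5.
random.seed(0)
data = [(x, 1.0 if x > 0.5 else 0.0)
        for x in [random.random() for _ in range(200)]]

w, b = 0.0, 0.0          # one weight, one bias: a single "neuron"
lr = 0.5                 # learning rate

for _ in range(2000):    # training loop: nudge w, b to reduce error
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # sigmoid prediction
        grad = p - y                               # error signal
        w -= lr * grad * x                         # gradient descent step
        b -= lr * grad

def predict(x):
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

# After training, the neuron has discovered the threshold on its own
print(round(predict(0.9)), round(predict(0.1)))
```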

One example of deep learning and thinking in the wild is how Facebook can automatically organize photos, identify faces, and suggest which friends to tag...

Or consider how Google can programmatically translate more than 100 languages with impressive accuracy.

Data, GPUs, and Why Deep Thinking and Learning Matters

It’s been more than a half-century since the science behind deep thinking and learning was discovered, but why is it just now starting to transform the world?

The answer lies in two major shifts: an abundance of digital data and access to powerful GPUs.

Massive Digital Data vs Cost of Storage

Together, these shifts make it possible to teach computers to read, see, and hear simply by throwing enough data and computing power at the problems.

There’s a special kind of irony reserved for all of these new breakthroughs that are really just the same breakthrough: deep neural networks.

The basic concepts of deep thinking and learning reach back to the 1950s, but they were largely ignored until the 1980s and ’90s, when supercomputing power began to catch up. What’s changed since, however, is the context: abundant computation, mature neural networks, and big data.

We now have access to essentially unlimited computational power thanks to Moore’s law and the cloud. On the other side, we’re creating more image, video, audio, and text data every day than ever before, due to the proliferation of smartphones, tablets, and cheap sensors.

“This is deep learning’s Cambrian explosion,” says Frank Chen, a partner at Andreessen Horowitz.

And it’s happening faster than technology can keep up: by the time a system is built, it’s already obsolete.

Four years ago, Google had just two deep thinking and learning projects. Today, the search giant is infusing deep thinking and learning into everything it touches: Search, Gmail, Maps, translation, YouTube, its self-driving cars, and more.

“We will move from a mobile-first to an AI-first world,” Google CEO Sundar Pichai said earlier this year.

What’s Next for Machine Intelligence?

In a very real sense, we’re teaching machines to teach themselves.

“AI and expert systems are the new electricity,” Ng says. “Just as electricity transformed industry after industry 100 years ago, AI will now do the same.”

Despite the breakthroughs, deep thinking and learning algorithms still can’t reason the way humans do. That could change soon, though.

Yann LeCun, director of AI research at Facebook and a professor at NYU, says that deep thinking and learning combined with reasoning, planning, and testing is one area of research making promising advances right now. Solving this in the next five years isn’t out of the realm of possibility.

“To enable deep thinking and learning systems to reason, we need to modify them so that they don’t produce a single output, say the interpretation of an image or the translation of a sentence, but can produce a whole set of alternative outputs, e.g. the various ways a sentence can be translated,” LeCun says.

Yet despite plentiful data and abundant computing power, deep thinking and learning is still very hard.

The Shortage of Machine and Deep Thinking and Learning Developers

Image from Stack Overflow 2016 Developer Survey

One bottleneck is the lack of developers trained to use these deep thinking and learning methodologies and techniques. Deep machine thinking and learning is already a highly specialized domain, and those with the knowledge to train deep thinking and learning models and deploy them into production are rarer still.

For instance, Google can’t recruit enough developers with deep thinking and learning expertise. Its solution is simply to teach its developers to use these methodologies and techniques instead.

Or consider Facebook: when its engineers struggled to take advantage of machine and deep thinking and learning, they created an internal tool for visualizing thinking and learning workflows, called FBLearner Flow.

But where does that leave the other 99% of developers who don’t work at one of these top tech companies?

Very few people in the world know how to use these tools.

“Deep machine thinking and learning is a complicated field,” says S. Somasegar, venture partner at Madrona Venture Group and former head of Microsoft’s Developer Division. “If you look up the Wikipedia page on deep learning, you’ll see 18 subcategories underneath Deep Neural Network Architectures, with names such as Convolutional Neural Networks, Spike-and-Slab RBMs, and LSTM-related differentiable memory structures...”

“These are not topics that a typical software developer will immediately understand.”

Yet, the number of companies that want to process unstructured data, like images or text, is rapidly increasing. The trend will continue, primarily because deep thinking and learning mechanisms, methodologies, and techniques are delivering impressive results.

That’s why it’s important for the people capable of training neural nets to share their work with as many people as possible, in essence democratizing access to machine intelligence algorithms, tools, and techniques.

Algorithmic Intelligence For All

Every industry needs machine intelligence.

Parallel GPUs on demand, running in the cloud, eliminate the manual work required for teams and organizations to experiment with cutting-edge deep thinking and learning algorithms and models, allowing them to get started for a fraction of the cost.

“Deep thinking and learning has proven to be remarkably powerful, but it is far from plug-n-play,” says Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence. “That’s where technology like Algorithmia’s comes in: to accelerate and streamline the use of deep thinking and learning.”

NVIDIA GPUs for Deep Thinking and Learning

While GPUs were originally designed to accelerate graphics and video games, more recently they’ve found new life powering AI and deep thinking and learning tasks, like natural language understanding and image recognition.

“We’ve had to build a lot of the technology and configure all of the components required to get GPUs to work with these deep thinking and learning frameworks in the cloud,” says Kenny Daniel, Algorithmia founder and CTO. “The GPU was never designed to be shared in a cloud service like this.”

Hosting deep thinking and learning models in the cloud can be especially challenging due to complex hardware and software dependencies. While GPU use in the cloud is still nascent, it’s essential for making deep thinking and learning tasks performant.

“For anybody trying to go down the road of deploying their deep learning models into a production environment, they’re going to run into problems pretty quickly,” Daniel says. “Using GPUs inside of containers is a challenge. There are driver issues, system dependencies, and configuration challenges. It’s a new space that’s not well-explored yet. There are not a lot of people out there trying to run multiple parallel GPU jobs inside Docker containers.”
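As a rough sketch of what exposing GPUs to containers involves today, the commands below assume Docker 19.03 or newer with NVIDIA’s container toolkit installed on the host; the image and script names in the second command are hypothetical placeholders.

```shell
# Requires Docker 19.03+ and the NVIDIA container toolkit (assumption).
# Sanity check: expose all host GPUs to a CUDA base image.
docker run --rm --gpus all nvidia/cuda:11.8.0-base-ubuntu22.04 nvidia-smi

# Pin a job to a single GPU so several containers can share the host.
# (my-dl-image and train.py are hypothetical names.)
docker run --rm --gpus '"device=0"' my-dl-image python train.py
```

The driver, toolkit, and framework versions all have to agree, which is exactly the kind of dependency coordination Daniel describes.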

“We’re dealing with the coordination needed between the cloud providers, the hardware, and the dependencies to intelligently schedule work and share GPUs, in parallel, perhaps virtually so that users don’t have to.”

How Deep Thinking and Learning Works

Most commercial deep thinking and learning products use “supervised thinking and learning” to achieve their goals.

For instance, in order to recognize a cat in a photo, a neural net needs to be trained with a set of labeled data that tells the algorithm either “there is a cat in this image” or “there is not a cat in this image.” If you throw enough images taken from different angles at the neural network, it will learn to identify a cat in an image.
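A minimal sketch of the supervised idea, with made-up feature vectors standing in for images and a nearest-neighbor rule standing in for the neural net; the features, labels, and values are all illustrative assumptions:

```python
# Illustrative labeled dataset: each "image" is reduced to two made-up
# features (ear pointiness, whisker density). The labels are supplied
# by humans, which is what makes this supervised.
train = [
    ((0.9, 0.8), "cat"),
    ((0.8, 0.9), "cat"),
    ((0.7, 0.7), "cat"),
    ((0.1, 0.2), "not cat"),
    ((0.2, 0.1), "not cat"),
    ((0.3, 0.2), "not cat"),
]

def classify(features):
    # 1-nearest-neighbor: copy the label of the closest training example
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda ex: dist(ex[0], features))[1]

print(classify((0.85, 0.75)))  # close to the cat cluster
print(classify((0.15, 0.25)))  # close to the not-cat cluster
```

A real network learns the features itself instead of being handed them, but the role of the labels is the same.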

Producing large, labeled datasets is the Achilles’ heel of most deep thinking and learning projects, however.

“Unsupervised thinking and learning,” on the other hand, enables us to discover new patterns and insights by approaching problems with little, limited, or no idea of what our results should look like.

In 2012, Google and Stanford let a neural net loose on more than 10 million YouTube stills. Without any human interaction, the neural net learned to identify cat faces, effectively identifying patterns in the data and teaching itself which parts of the images might be relevant.
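The flavor of unsupervised learning can be sketched with a tiny k-means clustering loop; the one-dimensional data points and the choice of two clusters are illustrative assumptions, but notice that no labels appear anywhere:

```python
# Unlabeled 1-D "pixel intensity" data with two natural groups;
# the values and the cluster count k=2 are illustrative.
points = [0.1, 0.15, 0.2, 0.12, 0.8, 0.85, 0.9, 0.82]

centers = [points[0], points[4]]  # crude initialization
for _ in range(10):               # k-means: structure emerges from data alone
    groups = {0: [], 1: []}
    for p in points:              # assign each point to its nearest center
        nearest = min((abs(p - c), i) for i, c in enumerate(centers))[1]
        groups[nearest].append(p)
    centers = [sum(g) / len(g) for g in groups.values()]  # recompute centers

print(sorted(round(c, 2) for c in centers))
```

The algorithm discovers the two groups on its own, which is the essence of finding patterns without a feedback loop of human-provided answers.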

The important distinction between supervised and unsupervised learning is that there is no feedback loop with unsupervised learning; there is no human correcting mistakes or scoring the results. Some feel a human must always remain in the loop, if only to alleviate the popular fear, stoked by the stories of Isaac Asimov and films like I, Robot, that these machines could take over the world.

There’s a bit of a gotcha here: we don’t really know exactly how deep thinking and learning works yet. Nobody can program a computer to do these things explicitly and comprehensively. Instead, we feed massive amounts of data into deep neural nets, sit back, and let the algorithms learn to recognize the patterns contained within. The next level is getting them to think “outside the box.”

“You essentially have software writing software,” says Jen-Hsun Huang, CEO of GPU leader NVIDIA.

When we master unsupervised thinking and learning, we’ll have machines capable of thinking “outside the box,” unlocking aspects of our world previously beyond our reach, recognition, and cognition.

“In computer vision, we get tantalizing glimpses of what the deep networks are actually doing,” says Peter Norvig, research director at Google. “We can identify line recognizers at one level, then, say, eye and nose recognizers at a higher level, followed by face recognizers above that, and finally whole-person and crowd recognizers.”

Understanding Deep Learning

In other areas of research, Norvig says, it has been hard to understand what the neural networks are doing and capable of.

“In speech recognition, computer vision object recognition, the game of Go, and other fields, the difference has been dramatic,” Norvig says. “Error rates go down when you use deep thinking and learning, and both these fields have undergone a complete transformation in the last few years. Essentially all the teams have chosen deep thinking and learning, because it just works.”

In 1950, Alan Turing wrote “We can only see a short distance ahead, but we can see plenty there that needs to be done.” Turing’s words hold true.

“In the next decade, AI and expert systems will transform society,” Ng says. “It will change both what we do and what we get computers to do for us.”

“Deep thinking and learning has already helped AI and expert systems make tremendous progress,” Ng says, “but the best is yet to come!”

