Deep Learning made easy with Deep Cognition

This past month I had the chance to meet the founders of DeepCognition.ai. Through Deep Learning Studio, Deep Cognition breaks down the significant barrier that keeps organizations from adopting Deep Learning and AI.

What is Deep Learning?

Before describing how Deep Cognition simplifies Deep Learning and AI, let’s first define the main concepts of Deep Learning.

Deep learning is a specific subfield of machine learning, a new take on learning representations from data which puts an emphasis on learning successive “layers” of increasingly meaningful representations.

Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction.

These layered representations are learned via models called “neural networks”, structured in literal layers stacked one after the other.

What we actually use in Deep Learning is an artificial neural network (ANN): a network inspired by biological neural networks, used to estimate or approximate functions that can depend on a large number of inputs that are generally unknown.

Although deep learning is a fairly old subfield of machine learning, it only rose to prominence in the early 2010s. In the few years since, it has achieved great things. François Chollet lists the following breakthroughs of Deep Learning:

  • Near-human level image classification.
  • Near-human level speech recognition.
  • Near-human level handwriting transcription.
  • Improved machine translation.
  • Improved text-to-speech conversion.
  • Digital assistants such as Google Now or Amazon Alexa.
  • Near-human level autonomous driving.
  • Improved ad targeting, as used by Google, Baidu, and Bing.
  • Improved search results on the web.
  • Answering natural language questions.
  • Superhuman Go playing.

Why Deep Learning?

As François Chollet states in his book, until the late 2000s we were still missing a reliable way to train very deep neural networks. As a result, neural networks were still fairly shallow, leveraging only one or two layers of representations, and so they were not able to shine against more refined shallow methods such as SVMs or Random Forests.

But in this decade, with the development of several simple but important algorithmic improvements, advances in hardware (mostly GPUs), and the exponential generation and accumulation of data, it’s now possible to run small deep learning models on your own laptop (or in the cloud).

How do we do Deep Learning?

Let’s see how we normally do Deep Learning.

Even though this is not a new field, what is new are the ways we can interact with the computer to do Deep Learning. And one of the most important moments for this field was the creation of TensorFlow.

TensorFlow is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them.
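
As a rough illustration, here is a minimal sketch of that idea in Python, using the TensorFlow 2.x API (tf.function traces Python code into a data flow graph; in the TensorFlow 1.x API of the time you would build a tf.Graph and run it in a tf.Session):

    import tensorflow as tf

    # Two constant nodes producing tensors
    a = tf.constant(2.0)
    b = tf.constant(3.0)

    # tf.function traces this Python function into a data flow graph:
    # the nodes are the operations (mul, add) and the edges are the
    # tensors flowing between them.
    @tf.function
    def f(x, y):
        return x * y + y

    print(f(a, b))  # tf.Tensor(9.0, shape=(), dtype=float32)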

What are tensors?

Tensors, defined mathematically, are simply arrays of numbers, or functions, that transform according to certain rules under a change of coordinates.

But in this scope a tensor is a generalization of vectors and matrices to potentially higher dimensions. Internally, TensorFlow represents tensors as n-dimensional arrays of base datatypes.

We need tensors because NumPy (the fundamental package for scientific computing with Python) lacks this notion of a tensor. Still, we can convert tensors to NumPy arrays and vice versa; that is possible because, in the end, both constructs are defined as arrays/matrices.
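
For instance, here is a minimal sketch of those conversions (assuming TensorFlow 2.x with eager execution, where tensors expose a .numpy() method):

    import numpy as np
    import tensorflow as tf

    # A tensor generalizes vectors and matrices to n dimensions:
    scalar = tf.constant(3.0)                       # rank 0
    vector = tf.constant([1.0, 2.0, 3.0])           # rank 1
    matrix = tf.constant([[1.0, 2.0], [3.0, 4.0]])  # rank 2

    # NumPy array -> tensor, and tensor -> NumPy array:
    arr = np.array([[5.0, 6.0], [7.0, 8.0]])
    as_tensor = tf.convert_to_tensor(arr)
    back = as_tensor.numpy()

    print(matrix.shape, as_tensor.dtype, type(back))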

TensorFlow combines computational algebra with compilation optimization techniques, making it easy to evaluate many mathematical expressions that would otherwise be difficult to calculate.

Keras

This is not a blog about TensorFlow; there are great ones out there. But it was necessary to introduce Keras.

Keras is a high-level neural networks API, written in Python and capable of running on top of TensorFlow, CNTK, or Theano. It was developed with a focus on enabling fast experimentation. Being able to go from idea to result with the least possible delay is key to doing good research.

It was created by François Chollet and was the first serious step toward making Deep Learning easy for the masses.

TensorFlow has a Python API which is not that hard, but Keras made it really easy for lots of people to get into Deep Learning. It should be noted that Keras is now officially a part of TensorFlow: https://www.tensorflow.org/api_docs/python/tf/contrib/keras
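
To give a feel for that simplicity, here is a minimal sketch of a small fully connected network trained on MNIST in standalone Keras (a recent Keras version is assumed; the architecture is just an illustration, not what any particular tool produces):

    from keras.datasets import mnist
    from keras.models import Sequential
    from keras.layers import Dense, Flatten
    from keras.utils import to_categorical

    # Load the MNIST digits: 28x28 grayscale images with labels 0-9
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]
    y_train, y_test = to_categorical(y_train, 10), to_categorical(y_test, 10)

    # A small fully connected network: from idea to training in a few lines
    model = Sequential([
        Flatten(input_shape=(28, 28)),
        Dense(128, activation='relu'),
        Dense(10, activation='softmax'),
    ])
    model.compile(optimizer='adam',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    model.fit(x_train, y_train, epochs=5, batch_size=64,
              validation_data=(x_test, y_test))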

Deep Learning Frameworks

I made a comparison between Deep Learning frameworks.

Keras is the winner for now; it is interesting to see that people prefer an easy interface and usability.

If you want more information about Keras, visit this post I made on LinkedIn: https://www.linkedin.com/feed/update/urn:li:activity:6344255087057211393

Deep Cognition

So normally we do Deep Learning by programming and learning new APIs, some harder than others, some really easy and expressive like Keras. But how about a visual API to create and deploy Deep Learning solutions with the click of a button?

This is the promise of Deep Cognition.

As they say, the Deep Cognition platform was founded to “democratize AI”.

Artificial intelligence is already creating significant value for the world economy. There is a (big) shortage of AI expertise, though, and that creates a significant barrier for organizations ready to adopt AI. This is the problem they are solving.

Their platform, Deep Learning Studio, is available as a Cloud Solution, a Desktop Solution (http://deepcognition.ai/desktop/) where the software runs on your own machine, or an Enterprise Solution (Private Cloud or On-Premise).

The Desktop version allows people to use their own computers with GPUs, without an hourly fee.

For this article we will be using the Cloud version of Deep Learning Studio. This is a single-user solution for creating and deploying AI. The simple drag & drop interface helps you design deep learning models with ease.

Pre-trained models as well as built-in assistive features simplify and accelerate the model development process. You can import model code and edit the model with the visual interface. The platform automatically saves each model version as you iterate and tune hyperparameters to improve performance. You can compare performance across versions to find your optimal design.

MNIST with Deep Cognition and AutoML

Deep Learning Studio can automagically design a deep learning model for your custom dataset thanks to its advanced AutoML feature. You will have a well-performing model up and running in minutes.

And yes, AutoML is what you think: automatic Machine Learning, applied here specifically to Deep Learning. It will create a whole pipeline for you, going from raw data to predictions.

As a small tutorial for Deep Learning Studio, let’s study the classic MNIST problem.

MNIST is a simple computer vision dataset. It consists of images of handwritten digits like these:

It also includes labels for each image, telling us which digit it is.

Let’s train a model to look at images and predict what digits they are, using Deep Cognition Cloud Studio and AutoML.

When you have an account, you just need to go to the http://deepcognition.ai webpage and click on Launch Cloud App.

This will take you to the UI, where you’ll see that you can choose from some sample projects:

Or create a new project, which is what we are going to do now:

This will take you to a page where you can choose the training-validation-test ratio, load a dataset or use an already uploaded one, specify the types of your data, and more.

The Model tab will allow you to create your own models using advanced Deep Learning features and different types of layers and neural networks, but we will use the AutoML feature so that Deep Cognition takes care of all of the modeling:

We choose Image because that is the type of data we are working with.

After you click Design you will have your first DL model available to customize and analyze:

The model looks like this:

So you can see that all the complexity of modeling for Deep Learning and coding has been simplified a LOT with this great platform.

If you want, you can also code in a Jupyter Notebook inside the platform, with all the necessary installations already done for you:

The reason is that neural networks are notoriously difficult to configure, and there are a lot of parameters that need to be set. Hyperparameter tuning is harder for neural networks than for any other machine learning algorithm.

But with Deep Cognition this can be done really easily and in a very flexible way: in the HyperParameters tab you can choose from several loss functions and optimizers to tune your parameters.
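
For contrast, here is a minimal hand-rolled sketch of what comparing a few optimizer and learning-rate choices looks like in plain Keras (assuming a recent Keras version and the x_train/y_train data from the earlier sketch); the platform exposes these same kinds of choices as UI options instead:

    from keras.models import Sequential
    from keras.layers import Dense, Flatten
    from keras.optimizers import Adam, SGD

    def build_model(optimizer):
        # Same illustrative architecture as before; only the optimizer changes
        model = Sequential([
            Flatten(input_shape=(28, 28)),
            Dense(128, activation='relu'),
            Dense(10, activation='softmax'),
        ])
        model.compile(optimizer=optimizer,
                      loss='categorical_crossentropy',
                      metrics=['accuracy'])
        return model

    # A few optimizer/learning-rate candidates to compare by hand
    candidates = {
        'adam-1e-3': Adam(learning_rate=1e-3),
        'adam-1e-4': Adam(learning_rate=1e-4),
        'sgd-1e-2': SGD(learning_rate=1e-2),
    }
    for name, opt in candidates.items():
        history = build_model(opt).fit(x_train, y_train, epochs=5,
                                       batch_size=64, validation_split=0.1,
                                       verbose=0)
        print(name, max(history.history['val_accuracy']))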

Now the fun part: training your model. In the Training tab you can choose from different types of instances (with CPU and GPU support) to do this. It will also help you monitor your training process and create a Loss and Accuracy graph for you:

Above there is a small GIF of the training process.

The results of your training can be found in the Results tab, where you will have all of your runs.

And finally, you can use the model you have trained on the testing and validation sets (or others you can upload) and see how well it performs when predicting the digit from an image.
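
In plain Keras terms, this evaluation and prediction step looks roughly like the following (continuing from the model and test data of the earlier sketch):

    import numpy as np

    # Overall accuracy on the held-out test images
    loss, acc = model.evaluate(x_test, y_test, verbose=0)
    print('test accuracy: %.4f' % acc)

    # Predict the digit for a single image: the model outputs one
    # probability per class, and we take the most likely one
    probs = model.predict(x_test[:1])
    print('predicted digit:', np.argmax(probs))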

The black box problem

Something that will come to your mind is: OK, I’m doing deep learning, but I have no idea how.

Because of that you can actually download the code that produced the predictions, and as you will see it is written in Keras. You can then upload the code and test it with the notebook that the system provides.
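
Loading that downloaded model back in a notebook looks roughly like this (the filename here is hypothetical, standing in for whatever file the platform gives you):

    from keras.models import load_model

    # 'downloaded_model.h5' is a hypothetical placeholder filename
    model = load_model('downloaded_model.h5')
    model.summary()  # inspect the layers the AutoML design chose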

The AutoML feature gives you the best of Keras and other DL frameworks in a simple click, and the good thing about it is that it chooses the best practices of DL for you. If you are not completely happy with the choices, you can change them really easily in the UI or interact with the notebook.

This system is built with the premise of making AI easy for everyone: you don’t have to be an expert to create these complex models. But my recommendation is to have an idea of what you are doing; read some of the TensorFlow or Keras documentation, watch some videos, and stay informed. If you are an expert in the subject, great! This will make your life much easier, and you can still apply your expertise when building the models.

Remember to check the references for more information about Deep Learning and AI.

About Favio Vázquez

Physicist and computer engineer. He holds a Master’s Degree in Physical Sciences from UNAM. He works in Big Data, Data Science, Machine Learning, and Computational Cosmology. Since 2015 he has contributed to Apache Spark, in Core and the MLlib library.

He’s Chief Data Scientist at Iron, performing distributed processing, data analysis, and machine learning, and directing data projects for the company. In addition, he works at BBVA Data & Analytics as a data scientist, doing machine learning and data analysis and maintaining the life cycles of projects and models with Apache Spark.

References
