Deep Learning Architecture for the Internet of Things (IoT): An Executive Summary

Deep Learning in IoT

In the era of the Internet of Things (IoT), an enormous amount of sensory data for a wide range of fields and applications is being generated and collected from numerous sensing devices. Applying analytics over such data streams to discover new information, predict future insights, and make control decisions is a challenging task, and it is what makes IoT a worthy paradigm for business intelligence and a quality-of-life-improving technology. However, analytics on IoT-enabled devices requires a platform consisting of machine learning (ML) and deep learning (DL) frameworks, a software stack, and hardware (for example, a Graphics Processing Unit (GPU) or a Tensor Processing Unit (TPU)).

DL frameworks and cloud platforms for IoT

There are several popular DL frameworks, each with its own pros and cons. Some are desktop-based, and some are cloud-based platforms where you can deploy/run your DL applications. Most of the libraries released under an open source license support graphics processors, which ultimately helps speed up the learning process. Such frameworks and libraries include TensorFlow, PyTorch, Keras, Deeplearning4j, H2O, and the Microsoft Cognitive Toolkit (CNTK). Even a few years back, other implementations, including Theano, Caffe, and Neon, were used widely, but these are now obsolete.

Deeplearning4j (DL4J) is one of the first commercial-grade, open source, distributed DL libraries built for Java and Scala, with integrated support for Hadoop and Spark. DL4J is built for use in business environments on distributed GPUs and CPUs. It aims to be cutting-edge and plug-and-play, with more convention than configuration, which allows fast prototyping for non-researchers. Numerous libraries can be integrated with DL4J to make your JVM experience easier, regardless of whether you are developing your ML application in Java or Scala.

ND4J, which is similar to NumPy for the JVM, provides the basic operations of linear algebra (matrix creation, addition, and multiplication), while ND4S is a scientific computing library for linear algebra and matrix manipulation that provides n-dimensional arrays for JVM-based languages.
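For readers who know NumPy, the analogy is direct. The following lines show, in NumPy, the kind of basic linear algebra that ND4J exposes on the JVM (the matrices here are purely illustrative):

```python
import numpy as np

# Matrix creation, addition, and multiplication -- the NumPy analogue
# of the basic linear-algebra operations that ND4J provides on the JVM.
a = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.ones((2, 2))
print(a + b)   # element-wise addition
print(a @ b)   # matrix multiplication
```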

The following diagram shows Google Trends data for the last year, illustrating how popular TensorFlow is:

[Figure: Google Trends comparison of search interest in DL frameworks]

In addition to these frameworks, Chainer is a powerful, flexible, and intuitive DL framework that supports CUDA computation. It requires only a few lines of code to leverage a GPU, and it runs on multiple GPUs with little effort. Most importantly, Chainer supports various network architectures, including feed-forward nets, convnets, recurrent nets, and recursive nets, as well as per-batch architectures. Another interesting feature of Chainer is its define-by-run approach to forward computation, by which any control flow statements of Python can be included without losing the ability to backpropagate. This makes code intuitive and easy to debug.
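As a minimal sketch of this define-by-run behavior (the network, layer sizes, and branching condition below are illustrative assumptions, not code from Chainer's documentation), note how an ordinary Python if statement participates in the forward pass:

```python
import chainer
import chainer.functions as F
import chainer.links as L
import numpy as np

class DynamicNet(chainer.Chain):
    def __init__(self):
        super().__init__()
        with self.init_scope():
            self.l1 = L.Linear(None, 32)  # input size inferred on first call
            self.l2 = L.Linear(32, 2)

    def forward(self, x):
        h = F.relu(self.l1(x))
        # Ordinary Python control flow inside the forward pass: the graph
        # is built on the fly, so backpropagation still works either way.
        if float(F.mean(h).data) > 0.5:
            h = F.dropout(h, ratio=0.3)
        return self.l2(h)

model = DynamicNet()
x = np.random.rand(4, 10).astype(np.float32)
loss = F.sum(model.forward(x))
loss.backward()  # gradients flow through the dynamically built graph
```

Because the graph is recorded as the code runs, a failing forward pass can be stepped through with an ordinary Python debugger.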

The DL framework power scores 2018 also show that TensorFlow, Keras, and PyTorch are far ahead of the other frameworks (see https://towardsdatascience.com/deep-learning-framework-power-scores-2018-23607ddf297a).

The scores were calculated based on usage, popularity, and interest in each DL framework across several online sources.

Apart from the preceding libraries, there are some recent initiatives for DL in the cloud. The idea is to bring DL capability to big data with billions of data points and high-dimensional data. For example, Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform, and NVIDIA GPU Cloud (NGC) all offer machine and DL services that are native to their public clouds.

In October 2017, AWS released Deep Learning AMIs (DLAMIs) for Amazon Elastic Compute Cloud (Amazon EC2) P3 instances. These AMIs come preinstalled with DL frameworks, such as TensorFlow, Gluon, and Apache MXNet, that are optimized for the NVIDIA Volta V100 GPUs within Amazon EC2 P3 instances. The DL service currently offers three types of AMIs: the Conda AMI, the Base AMI, and the AMI with source code.

CNTK is Microsoft's open source DL toolkit and the foundation of Azure's DL offering. Similar to the AWS offering, it focuses on tools that can help developers build and deploy DL applications. Azure also provides a model gallery that includes resources, such as code samples, to help enterprises get started with the service.
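Back on the AWS side, a quick way to verify that a DLAMI's preinstalled TensorFlow actually sees the P3 instance's V100 GPUs is a sketch like the following, assuming the Conda AMI with its TensorFlow environment activated (the API shown is the TF 1.x style that matched the DLAMIs of that time):

```python
# Minimal GPU sanity check on a Deep Learning AMI (TF 1.x-style API).
import tensorflow as tf

print(tf.__version__)              # preinstalled TensorFlow version
print(tf.test.is_gpu_available())  # True if CUDA sees at least one GPU
print(tf.test.gpu_device_name())   # e.g. '/device:GPU:0' on a P3 instance
```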

On the other hand, NGC empowers AI scientists and researchers with GPU-accelerated containers (see https://www.nvidia.com/en-us/data-center/gpu-cloud-computing/).

NGC features containerized DL frameworks, such as TensorFlow, PyTorch, MXNet, and more, that are tuned, tested, and certified by NVIDIA to run on the latest NVIDIA GPUs at participating cloud service providers. In addition, third-party services are available through the providers' respective marketplaces.

When it comes to cloud-based IoT system development, the market currently forks into three obvious routes:

  • off-the-shelf platforms (for example, AWS IoT Core, Azure IoT Suite, and Google Cloud IoT Core), which trade off vendor lock-in and higher-end volume pricing against cost-effective scalability and shorter lead times;
  • reasonably well-established MQTT configurations over the Linux stack (for example, Eclipse Mosquitto; see the publisher sketch after this list); and
  • the more exotic emerging protocols and products (for example, Nabto’s P2P protocol) that are developing enough uptake, interest, and community investment to stake a claim for strong market presence in the future.
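For the second route, publishing telemetry to a Mosquitto broker takes only a few lines with the paho-mqtt client. The following is a minimal sketch; the broker hostname, topic, and payload are placeholder assumptions for a broker on the local network:

```python
# Minimal MQTT publisher sketch using paho-mqtt against a Mosquitto broker.
# "broker.local", the topic, and the reading are hypothetical placeholders.
import json
import time
import paho.mqtt.client as mqtt

client = mqtt.Client(client_id="edge-sensor-01")
client.connect("broker.local", 1883, keepalive=60)
client.loop_start()  # run the network loop in a background thread

reading = {"device": "edge-sensor-01", "temperature_c": 21.7, "ts": time.time()}
info = client.publish("sensors/room1/temperature", json.dumps(reading), qos=1)
info.wait_for_publish()  # block until the broker acknowledges (QoS 1)

client.loop_stop()
client.disconnect()
```

QoS 1 requests at-least-once delivery, a common compromise for telemetry on constrained links.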

As a DL framework, Chainer is a great choice for devices powered by Intel Atom, NVIDIA Jetson TX2, and Raspberry Pi, because AWS Greengrass provides prebuilt packages for three popular ML frameworks (TensorFlow, Apache MXNet, and Chainer) on exactly those devices; we therefore don't need to build and configure the ML framework for our devices from scratch. Each framework works in a similar fashion on Greengrass, depending on a framework library deployed to the Greengrass core and a set of model files generated using Amazon SageMaker and/or stored directly in an Amazon S3 bucket.

From Amazon SageMaker or Amazon S3, the ML models can be deployed to AWS Greengrass and used as a local resource for ML inference. Conceptually, AWS IoT Core functions as the management plane for deploying ML inference to the edge.
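Putting those pieces together, a Greengrass Lambda function for local inference might look roughly like the following sketch; the resource path, topic, input shape, and MXNet model are illustrative assumptions rather than a fixed Greengrass convention:

```python
# A minimal sketch of a Greengrass Lambda function doing local ML inference.
# Assumptions: an MXNet checkpoint attached as a local ML resource (trained in
# SageMaker or pulled from S3), a 4-feature input vector, and a made-up topic.
import json
import greengrasssdk
import mxnet as mx

iot_client = greengrasssdk.client('iot-data')

# Hypothetical local path where the Greengrass group mounts the ML resource
MODEL_DIR = '/greengrass-machine-learning/mxnet/sensor-model'

# Load the model once at container start, not on every invocation
sym, arg_params, aux_params = mx.model.load_checkpoint(MODEL_DIR + '/model', 0)
mod = mx.mod.Module(symbol=sym, label_names=None)
mod.bind(for_training=False, data_shapes=[('data', (1, 4))])  # assumed shape
mod.set_params(arg_params, aux_params, allow_missing=True)

def lambda_handler(event, context):
    # 'event' is assumed to carry a feature vector from a local sensor reading
    features = mx.nd.array([event['features']])
    mod.forward(mx.io.DataBatch(data=[features]))
    prediction = mod.get_outputs()[0].asnumpy().tolist()
    # Publish the inference result back through AWS IoT Core
    iot_client.publish(topic='sensors/inference', payload=json.dumps(prediction))
    return prediction
```

Loading the checkpoint at container start rather than per invocation matters on constrained edge hardware, where model deserialization can dominate latency.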


