Deep Learning in IoT
In the era of the Internet of Things (IoT), an enormous amount of sensory data is being generated and collected from numerous sensing devices across a wide range of fields and applications. Applying analytics over such data streams to discover new information, predict future insights, and make control decisions is a challenging task, and it is what makes IoT a worthy paradigm for business intelligence and quality-of-life-improving technology. However, analytics on IoT-enabled devices requires a platform consisting of machine learning (ML) and deep learning (DL) frameworks, a software stack, and hardware (for example, a Graphics Processing Unit (GPU) or Tensor Processing Unit (TPU)).
Similar to NumPy, ND4J provides the basic operations of linear algebra (matrix creation, addition, and multiplication) for the JVM. ND4S is its Scala counterpart: a scientific computing library for linear algebra and matrix manipulation that also provides n-dimensional arrays for JVM-based languages. The following diagram shows Google Trends data for the last year, illustrating how popular TensorFlow is:
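Since ND4J's core operations deliberately mirror NumPy's, the same linear algebra looks like this in NumPy itself (a Python sketch of the operations ND4J exposes on the JVM, not ND4J's Java API):

```python
import numpy as np

# Matrix creation: the same building blocks ND4J provides for the JVM
a = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.ones((2, 2))          # 2x2 matrix of ones

# Element-wise matrix addition
c = a + b                    # [[2, 3], [4, 5]]

# Matrix multiplication
d = a @ b                    # each row of `a` times the all-ones matrix

print(c)
print(d)
```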
In addition to these frameworks, Chainer is a powerful, flexible, and intuitive DL framework that supports CUDA computation. It requires only a few lines of code to leverage a GPU, and it runs on multiple GPUs with little effort. Most importantly, Chainer supports various network architectures, including feed-forward nets, convnets, recurrent nets, and recursive nets, as well as per-batch architectures. Another interesting feature of Chainer is its define-by-run style of forward computation, by which any control flow statements of Python can be included without sacrificing the ability to backpropagate. This makes code intuitive and easy to debug.
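The define-by-run idea can be illustrated in a few lines of plain Python: the computation graph is recorded as ordinary code executes, so native `if`/`for` statements compose freely with backpropagation. The toy `Variable` class below is a minimal sketch of this concept, not Chainer's actual API:

```python
# Toy define-by-run autodiff: the graph is built while the code runs.
class Variable:
    def __init__(self, value, parents=()):
        self.value = value
        self.grad = 0.0
        self._parents = parents  # list of (parent, local_gradient)

    def __mul__(self, other):
        other = other if isinstance(other, Variable) else Variable(other)
        return Variable(self.value * other.value,
                        parents=[(self, other.value), (other, self.value)])

    def __add__(self, other):
        other = other if isinstance(other, Variable) else Variable(other)
        return Variable(self.value + other.value,
                        parents=[(self, 1.0), (other, 1.0)])

    def backward(self, upstream=1.0):
        # Chain rule: accumulate gradients back through recorded parents
        self.grad += upstream
        for parent, local in self._parents:
            parent.backward(upstream * local)

x = Variable(3.0)
y = Variable(1.0)
for _ in range(2):      # ordinary Python loop shapes the graph at runtime
    y = y * x           # y = x * x = 9 after the loop
if y.value > 5:         # even data-dependent branches are fine
    y = y + x           # y = x*x + x = 12

y.backward()
print(y.value, x.grad)  # 12.0 7.0, i.e. d(x*x + x)/dx = 2x + 1 at x = 3
```

Frameworks with a static, define-and-run graph cannot express such data-dependent branches this directly, which is why this style makes debugging with ordinary Python tools so natural.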
The DL framework power scores 2018 also show that TensorFlow, Keras, and PyTorch are far ahead of other frameworks.
Scores were calculated based on usage, popularity, and interest in DL frameworks across a number of online sources. Apart from the preceding libraries, there are some recent initiatives for DL in the cloud. The idea is to bring DL capability to big data with billions of data points and high-dimensional data. For example, Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform, and NVIDIA GPU Cloud (NGC) all offer machine and DL services that are native to their public clouds.
In October 2017, AWS released Deep Learning AMIs (DLAMIs) for Amazon Elastic Compute Cloud (Amazon EC2) P3 instances. These AMIs come preinstalled with DL frameworks, such as TensorFlow, Gluon, and Apache MXNet, which are optimized for the NVIDIA Volta V100 GPUs within Amazon EC2 P3 instances. The DL service currently offers three types of AMIs: Conda AMI, Base AMI, and AMI with source code. The Microsoft Cognitive Toolkit (CNTK) is Azure's open source DL service. Similar to the AWS offering, it focuses on tools that can help developers build and deploy DL applications. Azure also provides a model gallery that includes resources, such as code samples, to help enterprises get started with the service.
On the other hand, NGC empowers AI scientists and researchers with GPU-accelerated containers
The NGC features containerized DL frameworks, such as TensorFlow, PyTorch, and MXNet, that are tuned, tested, and certified by NVIDIA to run on the latest NVIDIA GPUs on participating cloud service providers. Nevertheless, third-party services are also available through their respective marketplaces.
When it comes to the cloud-based IoT system-development market, it currently forks into three obvious routes:
- off-the-shelf platforms (for example, AWS IoT Core, Azure IoT Suite, and Google Cloud IoT Core), which trade off vendor lock-in and higher-end volume pricing against cost-effective scalability and shorter lead times;
- reasonably well-established MQTT configurations over the Linux stack (for example, Eclipse Mosquitto); and
- the more exotic emerging protocols and products (for example, Nabto’s P2P protocol) that are developing enough uptake, interest, and community investment to stake a claim for strong market presence in the future.
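For the MQTT route in particular, much of the flexibility of a broker such as Eclipse Mosquitto comes from topic filters: the `+` wildcard matches exactly one topic level, while `#` matches all remaining levels and must come last. A minimal Python sketch of those matching rules, as defined by the MQTT specification (the function name is ours, not a library API):

```python
def topic_matches(filter_str, topic):
    """Check an MQTT topic name against a subscription filter.

    '+' matches exactly one topic level; '#' matches any remaining
    levels (including none) and must be the filter's last level.
    """
    f_levels = filter_str.split("/")
    t_levels = topic.split("/")
    for i, f in enumerate(f_levels):
        if f == "#":
            return i == len(f_levels) - 1   # '#' must be the final level
        if i >= len(t_levels):
            return False                    # filter is deeper than topic
        if f != "+" and f != t_levels[i]:
            return False                    # literal level must match exactly
    return len(f_levels) == len(t_levels)

print(topic_matches("sensors/+/temperature", "sensors/room1/temperature"))  # True
print(topic_matches("sensors/#", "sensors/room1/humidity"))                 # True
print(topic_matches("sensors/+", "sensors/room1/humidity"))                 # False
```

This level-by-level matching is what lets a single IoT dashboard subscribe to `sensors/#` while individual actuators listen only to their own narrow topics.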
For DL at the edge, AWS IoT Greengrass ML inference is a great choice for devices powered by Intel Atom, NVIDIA Jetson TX2, and Raspberry Pi. Using Greengrass, we don't need to build and configure the ML framework for our devices from scratch: it provides prebuilt packages for three popular ML frameworks, namely TensorFlow, Apache MXNet, and Chainer. Each works in a similar fashion, depending on a library installed on the Greengrass core and a set of model files generated using Amazon SageMaker and/or stored directly in an Amazon S3 bucket.
From Amazon SageMaker or Amazon S3, the ML models can be deployed to AWS Greengrass to be used as a local resource for ML inference. Conceptually, AWS IoT Core functions as the management plane for deploying ML inference to the edge.
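Conceptually, the deployed model ends up as files on the device that a local Lambda-style function loads and scores against, so inference keeps working even when connectivity to the cloud is intermittent. The following is a hypothetical sketch of that pattern only; the handler name, mount path, and toy linear model are illustrative assumptions, not the actual Greengrass API:

```python
import numpy as np

# Hypothetical local resource path where a Greengrass deployment would
# mount the model artifact pulled from SageMaker or S3.
MODEL_DIR = "/greengrass-machine-learning/model"

def load_weights(path=MODEL_DIR):
    # In a real deployment this would read the model artifact from `path`;
    # fixed toy weights keep the sketch self-contained and runnable.
    return np.array([0.5, -0.25]), 1.0

def function_handler(event, context=None):
    """Lambda-style entry point: score one sensor reading locally."""
    w, b = load_weights()
    x = np.array(event["features"], dtype=float)
    score = float(x @ w + b)        # toy linear model: w . x + b
    return {"prediction": score}

print(function_handler({"features": [2.0, 4.0]}))  # {'prediction': 1.0}
```

The design point is the separation of concerns from the passage above: the cloud side (SageMaker/S3 plus IoT Core as the management plane) owns training and deployment, while the device-side handler only loads and scores.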
Based on my research