Deep Learning Methods on IoT: A Survey



Deep learning is a machine learning
technique that teaches computers to do what comes naturally to humans and
helps make sense of data such as images, sound, and text. In this paper, we
provide an overview of an advanced machine learning technique, namely Deep
Learning (DL), that facilitates analytics and learning in the IoT domain;
IoT applications that have incorporated DL in their intelligence background
are also discussed. These methods have dramatically improved the
state of the art in computer vision, speech recognition, natural language
processing (NLP), and many other domains such as drug discovery and cancer
cell detection.

We also summarize the major reported research attempts that have leveraged
deep learning in the IoT domain.







Keywords: deep learning, collaborative filtering, hybrid recommender, Internet of Things.


The vision of the Internet of Things (IoT) is to transform traditional objects into
smart ones by exploiting a wide range of advanced technologies, from embedded
devices and communication technologies to Internet protocols, data analytics,
and so forth. In recent years, many IoT applications have arisen in different vertical
domains, for example health, transportation, smart home, smart city, agriculture,
and education. Deep Learning (DL) has been actively utilized in many IoT
applications in recent years.


Automated Driving: Automotive
researchers are using deep learning to automatically detect objects such as
stop signs, traffic lights, and even pedestrians in order to decrease accidents.

Aerospace and Defense: Deep learning is used to identify objects from satellite
imagery, locate areas of interest, and identify safe or unsafe zones for troops.

Medical Research: Cancer
researchers are using deep learning to automatically detect cancer cells.

Industrial Automation: Deep learning helps protect workers by automatically
detecting when people or objects are within an unsafe distance of machines.

Electronics: Deep
learning is being used in automated hearing and speech translation.


1) Deep Neural Networks (DNN)

A deep neural
network (DNN) is a multilayer perceptron network with many hidden layers, whose
weights are fully connected and are often initialized using stacked RBMs or a DBN
[31], [32]. The success of DNNs lies in their ability to accommodate a larger
number of hidden units and to use better parameter initialization
methods. A DNN with a large number of hidden units can have better modeling
capacity.
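The fully connected, multilayer structure described above can be sketched as a simple forward pass. This is a minimal illustration only: the random initialization and the layer sizes below are assumptions for the example, not pre-training with stacked RBMs or a DBN as the text describes.

```python
import numpy as np

def relu(x):
    # Element-wise non-linearity applied after each hidden layer
    return np.maximum(0.0, x)

def dnn_forward(x, weights, biases):
    # Pass the input through every fully connected hidden layer,
    # then through a final linear output layer.
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = relu(a @ W + b)
    return a @ weights[-1] + biases[-1]

# Toy example: 4 inputs -> two hidden layers of 8 units -> 3 outputs
rng = np.random.default_rng(0)
sizes = [4, 8, 8, 3]
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]
out = dnn_forward(rng.standard_normal(4), weights, biases)
print(out.shape)  # (3,)
```

Adding more entries to `sizes` deepens the network without changing the forward-pass code, which is what "many hidden layers" amounts to structurally.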

1.1 Basic terminology of deep learning

1) Deep belief network (DBN): A generative model composed of multiple layers
of stochastic, hidden variables. The top two layers have undirected, symmetric
connections between them; the lower layers receive top-down, directed
connections from the layer above.

2) Boltzmann machine (BM): A network of symmetrically connected,
neuron-like units that make stochastic decisions about whether to be on or off.

3) Restricted Boltzmann machine (RBM): A special type of Boltzmann machine
consisting of a layer of visible units and a layer of hidden units, with no
visible-visible or hidden-hidden connections.

4) Deep Boltzmann machine (DBM): A special type of BM in which the hidden
units are organized in a deep layered manner, only adjacent layers are
connected, and there are no visible-visible or hidden-hidden connections
within the same layer.

5) Deep neural network (DNN): A multilayer network with many hidden
layers, whose weights are fully connected and are often initialized using
stacked RBMs or a DBN.

6) Deep auto-encoder: A DNN whose output target is the data input itself,
often pre-trained with a DBN or by using distorted training data to
regularize the learning.

7) Distributed representation: The observed data are modeled as being
generated by the interactions of many hidden factors. A particular factor
learned from some configurations can often generalize well to others.

Distributed representations form the basis of deep learning.


Most deep learning
methods use neural network architectures.

The term “deep”
usually refers to the number of hidden layers in the neural network.
Traditional neural networks only contain 2-3 hidden layers, while deep networks
can have as many as 150 layers.


2) Artificial Neural Networks (ANN)

The original goal of
the ANN approach was to solve problems in the same way that a human brain
would. ANNs have been used on a variety of tasks, including computer
vision, speech recognition, machine translation, social network filtering,
playing board and video games, and medical diagnosis.

3) Convolutional Neural Network (CNN)

A Convolutional Neural Network is a type of deep learning model in which
each module consists of a convolutional layer and a pooling layer. These
modules are often stacked one on top of another, or with a DNN on top,
to form a deep model.
CNNs are very similar to ordinary neural networks: they are made of
neurons with learnable weights and biases, where each neuron receives some
inputs, performs a dot product over them, and optionally follows it with a
non-linearity.

This is reflected in the architecture of the model: each neuron receives
inputs and transforms them through a series of hidden layers. Each hidden
layer consists of neurons, each fully connected to all neurons in the
previous layer, while neurons within a single layer function independently
and do not share connections with one another. The final fully connected
layer is the "output layer", and in a classification system it represents
the class scores. For example, a single fully connected neuron in the first
hidden layer of a regular neural network processing a 32x32x3 image would
have 32*32*3 = 3072 weights.

There are
three main parameters that control the size of the output volume of the
convolution layer. They are:

1. Depth

2. Stride

3. Zero-padding



The main advantage of convolutional neural networks is that the inputs are
represented in an image format, which makes the network a more natural fit
for visual data.

The applications of convolutional neural networks are:

1. Image recognition

2. Video analysis

3. Go

4. Fine-tuning


Figure 1.1: Architecture of CNN

4) Recurrent Neural Networks (RNNs): The input to an RNN
consists of both the current sample and the previously observed samples,
and the output of an RNN at time step t-1 affects
the output at time step t. Each neuron is
equipped with a feedback loop that returns the current output as an input for
the next step.
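The feedback loop described above can be sketched as a single recurrence that mixes the current sample with the previous hidden state. The tanh activation, weight names, and sizes below are assumptions for the example:

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    # The new state depends on the current sample x_t and on the
    # previous state h_prev (the feedback loop of the RNN).
    return np.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)

rng = np.random.default_rng(1)
W_xh = rng.standard_normal((3, 5)) * 0.1   # input -> hidden
W_hh = rng.standard_normal((5, 5)) * 0.1   # hidden -> hidden (feedback)
b_h = np.zeros(5)

h = np.zeros(5)
for t in range(4):                         # unroll over 4 time steps
    h = rnn_step(rng.standard_normal(3), h, W_xh, W_hh, b_h)
print(h.shape)  # (5,)
```

Because `h` is fed back at every step, the output at time step t depends on all earlier samples, which is the property the text describes.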

5) Long Short-Term Memory (LSTM):

LSTM is an extension of RNNs. It uses the concept
of gates for its units, each of which computes a value between 0
and 1 based on its input. The feedback loop is used to
store information, and each neuron in an LSTM (also called a
memory cell) has a multiplicative forget gate, read gate, and
write gate. These gates are introduced to control access
to the memory cells and to prevent them from perturbation by irrelevant
inputs. When the forget gate is active, the neuron writes
its data into itself. When the forget gate is turned off
by sending a 0, the neuron forgets its last content. When the
write gate is set to 1, other connected neurons can write to that neuron.
If the read gate is set to 1, the connected neurons can
read the content of the neuron.
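The gating behavior above can be sketched as one step of an LSTM cell. This follows the common formulation in which the gates are sigmoids in (0, 1) multiplying the cell content; the parameter names and the stacked-weight layout are assumptions for the example, not notation from the text:

```python
import numpy as np

def sigmoid(x):
    # Gate activations lie between 0 and 1, as described above
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    # W, U, b stack the parameters of the three gates plus the
    # candidate content; split the pre-activation into four parts.
    z = x_t @ W + h_prev @ U + b
    f, i, o, g = np.split(z, 4)
    f, i, o = sigmoid(f), sigmoid(i), sigmoid(o)
    c = f * c_prev + i * np.tanh(g)  # forget old content, write new
    h = o * np.tanh(c)               # read gate controls the output
    return h, c

n_in, n_hid = 3, 4
rng = np.random.default_rng(2)
W = rng.standard_normal((n_in, 4 * n_hid)) * 0.1
U = rng.standard_normal((n_hid, 4 * n_hid)) * 0.1
b = np.zeros(4 * n_hid)
h, c = np.zeros(n_hid), np.zeros(n_hid)
h, c = lstm_step(rng.standard_normal(n_in), h, c, W, U, b)
print(h.shape, c.shape)  # (4,) (4,)
```

When the forget gate `f` is near 1 the cell keeps writing its content into itself, and when it is driven to 0 the previous content is erased, matching the description in the text.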

6) Autoencoders (AEs):

AEs consist of an input layer and an output layer that are connected
through one or more hidden layers. AEs have the
same number of input and output units.
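A minimal sketch of this structure is an encoder that compresses the input through a smaller hidden layer and a decoder that reconstructs it, so the output layer has the same number of units as the input layer. The layer sizes and names below are illustrative assumptions:

```python
import numpy as np

def autoencoder_forward(x, W_enc, b_enc, W_dec, b_dec):
    # Encode to a smaller hidden code, then decode back to a
    # reconstruction with the same shape as the input.
    h = np.tanh(x @ W_enc + b_enc)
    return h @ W_dec + b_dec

rng = np.random.default_rng(3)
n_in, n_hid = 6, 2                 # 6 inputs -> 2-unit bottleneck
W_enc = rng.standard_normal((n_in, n_hid)) * 0.1
W_dec = rng.standard_normal((n_hid, n_in)) * 0.1
x = rng.standard_normal(n_in)
x_hat = autoencoder_forward(x, W_enc, np.zeros(n_hid), W_dec, np.zeros(n_in))
print(x.shape == x_hat.shape)  # True
```

Training would minimize the reconstruction error between `x` and `x_hat`, which forces the bottleneck code to capture the salient factors of the input.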

7) Variational Autoencoders (VAEs)

8) Generative Adversarial Networks (GANs)

9) Ladder Networks:

Ladder networks were proposed in 2015 by Valpola et
al. [30] to support unsupervised learning. A ladder network performs a variety of
functions such as handwriting recognition and image classification. The
architecture of a ladder network consists of two encoders and one decoder.


Fig. 2.1. Ladder network structure with two layers

Fig. 3.1. Structure of a recurrent neural network

10) Architectures of Deep Learning

10.1 Generative deep
architectures: intended to characterize the high-order correlation
properties of the observed or visible data for pattern analysis or
synthesis purposes.

10.2 Discriminative deep
architectures: intended to directly provide discriminative power for
pattern classification.

10.3 Hybrid deep
architectures: where the goal is discrimination, but it is assisted (often
in a significant way) by the outcomes of generative architectures via
better optimization and/or regularization.

Fig. 4.1. Google Trends showing increasing attention toward deep learning in recent years.


Fig. 5.1. The overall mechanism of training of a DL model.

Islanding Detection Methods

This section provides an
overview of various islanding detection methods. There are three
major categories of islanding detection methods: passive resident methods,
active resident methods, and communication-based methods.

Passive Resident Methods

Passive resident methods
are based on the detection of abnormalities in electrical signals at the
point of common coupling (PCC) of a distributed generation (DG) unit.
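In essence, a passive resident method monitors PCC quantities and flags islanding when they leave their normal operating windows. The sketch below illustrates this idea only; the voltage and frequency threshold values are illustrative assumptions, not limits taken from the text:

```python
def passive_islanding_check(voltage_pu, frequency_hz,
                            v_limits=(0.88, 1.10),
                            f_limits=(59.3, 60.5)):
    # Flag islanding when the PCC voltage (per unit) or frequency
    # falls outside its assumed normal operating window.
    v_ok = v_limits[0] <= voltage_pu <= v_limits[1]
    f_ok = f_limits[0] <= frequency_hz <= f_limits[1]
    return not (v_ok and f_ok)

print(passive_islanding_check(1.0, 60.0))   # False: normal operation
print(passive_islanding_check(0.80, 58.9))  # True: abnormal PCC signals
```

A real implementation would add time delays and hysteresis to avoid nuisance trips on transient disturbances.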

Active Resident Methods

An active resident method
artificially creates abnormalities in the PCC signals that can be detected
subsequent to an islanding event.

Communication-Based Methods

Communication-based methods are based on the transmission of data between a
DG unit and the host utility system.
The data is analyzed by the DG unit to determine if the operation of
the DG should be halted.

Global strategies

Deep learning provides two main improvements over
traditional machine learning methods. They are:

1. It reduces the need for hand-crafted, engineered feature sets to be
used exclusively for training purposes.

2. It increases the accuracy of the prediction model for
larger amounts of data.