CHAPTER-7

Experiments and Outcomes

The accuracy of the bounding boxes and the prediction percentage in the results are determined by the following:

1) Batch size

2) Learning rate

3) Number of training iterations

Batch size is the number of training samples processed per batch in a single training iteration.

Learning rate is the training parameter that controls the size of the weight and bias changes during learning.

Number of iterations is the number of training iterations after which the network is optimally trained.
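For illustration, the sketch below shows where each of these three parameters enters a Keras training run. The optimiser, batch size and epoch count mirror the settings used in the appendix; the toy model and random data are assumptions made only for this example.

    # Minimal sketch (illustrative only): batch size, learning rate and
    # number of iterations (epochs) in a Keras training run.
    import numpy as np
    from keras.models import Sequential
    from keras.layers import Dense
    from keras.optimizers import Adam

    model = Sequential([Dense(6, activation='softmax', input_shape=(270,))])
    model.compile(optimizer=Adam(lr=0.0001),          # learning rate
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])

    # Random stand-in data: 100 samples of 270 features, 6 one-hot classes.
    X = np.random.rand(100, 270)
    Y = np.eye(6)[np.random.randint(0, 6, 100)]

    model.fit(X, Y,
              batch_size=10,   # samples processed per weight update
              epochs=10)       # number of passes over the training data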

IoU: Intersection over Union is an evaluation metric used to measure the accuracy of an object detector on a particular dataset.

In the numerator we compute the area of overlap between the predicted bounding box and the ground-truth bounding box. The denominator is the area of union, or more simply, the total area covered by both the predicted bounding box and the ground-truth bounding box. Dividing the area of overlap by the area of union yields the final score, the Intersection over Union.
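As an illustrative aid (the helper below is hypothetical and not part of the project code), IoU for two axis-aligned boxes given as (x1, y1, x2, y2) corner coordinates can be computed as:

    def iou(boxA, boxB):
        # Boxes are (x1, y1, x2, y2) with (x1, y1) the top-left corner.
        xA = max(boxA[0], boxB[0])
        yA = max(boxA[1], boxB[1])
        xB = min(boxA[2], boxB[2])
        yB = min(boxA[3], boxB[3])
        # Area of overlap (zero if the boxes do not intersect).
        inter = max(0, xB - xA) * max(0, yB - yA)
        areaA = (boxA[2] - boxA[0]) * (boxA[3] - boxA[1])
        areaB = (boxB[2] - boxB[0]) * (boxB[3] - boxB[1])
        # Union = sum of the two areas minus the overlap counted twice.
        return inter / float(areaA + areaB - inter)

    print(iou((0, 0, 10, 10), (5, 5, 15, 15)))   # ~0.143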

7.1 Establishing Optimal Parameters

This part of the project deals with experimenting with various training parameters in order to decide on an optimal set of parameters.

7.1.1 Batch Size

The batch size was set to 10 and experiments were carried out.

A batch size of 10 proved optimal, giving fast training speed and efficient predictions when tested.

7.1.2 Learning Rate

1. Learning Rate (0.0001)

Activities evaluated: Jogging, Lying Down, Sitting, Stairs, Standing, and Walking.

7.1.3 Classification Report for CNN:

7.1.4 Classification Report for RNN:

7.1.5 Accuracy and Error Rate of CNN:

7.1.6 Accuracy and Error Rate of RNN:
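The classification reports and the accuracy/error figures in Sections 7.1.3-7.1.6 can be generated from the trained models. The following is a minimal sketch, assuming the variable names used in the appendix (cnn_model, c_X_test, c_Y_test, class_labels); it is not the exact script used to produce the reports.

    from sklearn.metrics import classification_report
    import numpy as np

    def report_and_error(model, X_test, Y_test, names):
        # Per-class precision/recall/F1 plus overall accuracy and error rate.
        y_true = np.argmax(Y_test, axis=1)
        y_pred = model.predict_classes(X_test)   # old Keras Sequential API, as in the appendix
        print(classification_report(y_true, y_pred, target_names=list(names)))
        loss, accuracy = model.evaluate(X_test, Y_test, verbose=0)
        print('accuracy:', accuracy, 'error rate:', 1 - accuracy)

    # e.g. report_and_error(cnn_model, c_X_test, c_Y_test, class_labels)
    # e.g. report_and_error(lstm_model, X_test, Y_test, class_labels)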

7.2 Comparability Graph

The developed models (RNN and CNN) are compared for their efficiency in Human Activity Classification.

CHAPTER-8

Conclusion

This project is based on Convolutional Neural Networks and Recurrent Neural Networks to classify human activities performed on a daily basis and to compare the accuracy of both methods.

The main objective of the proposed system is achieved using the following modules:

Using the customised standard dataset for the system, consisting of numerical data for all the activity classes. A Jupyter notebook and a Python script are used for generating the labels.

The models for both the Convolutional Neural Network and the Recurrent Neural Network were developed, and the standard dataset was loaded into them.

Training the models to obtain accuracy, precision and recall values from the respective models.

The accuracy obtained from the Convolutional Neural Network is 95.02%, which is lower than that of the Recurrent Neural Network, whose accuracy is 98.44%.

The loss and error percentage are also lower for the Recurrent Neural Network (RNN model) when compared to those of the Convolutional Neural Network.

The differences in accuracy, loss and error values between the Convolutional Neural Network and the Recurrent Neural Network arise because CNNs take a fixed-size input and generate fixed-size outputs; they are variations of multilayer perceptrons designed to use minimal amounts of preprocessing.

RNNs, unlike feedforward networks such as CNNs, can use their internal memory to process arbitrary sequences of inputs, and therefore achieve higher accuracy than CNNs in this task.
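To illustrate this difference only (a sketch using the same Keras API as the appendix, not part of the project code): a convolutional stack must be built for one fixed window shape, whereas an LSTM can be declared with an unspecified number of timesteps and applied to windows of different lengths.

    import numpy as np
    from keras.models import Sequential
    from keras.layers import Conv2D, Flatten, Dense, LSTM

    # CNN: the input shape is fixed at build time; every sample must be (90, 3, 1).
    cnn = Sequential([Conv2D(16, (2, 2), activation='relu', input_shape=(90, 3, 1)),
                      Flatten(),
                      Dense(6, activation='softmax')])

    # RNN: the timestep dimension may be left as None, so windows of any length
    # (here 90 and 150) can be processed by the same network.
    rnn = Sequential([LSTM(16, input_shape=(None, 3)),
                      Dense(6, activation='softmax')])
    print(rnn.predict(np.random.rand(1, 90, 3)).shape)    # (1, 6)
    print(rnn.predict(np.random.rand(1, 150, 3)).shape)   # (1, 6)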

Future Work

The proposed system provides a method for Human Activity Classification using numerical-data-based deep learning. The system can be extended to automate the process. Some of the future enhancements for the proposed system could be:

Improving the system for data capture through sensors.

Improving the accuracy of the CNN model.

Developing a more efficient model to classify human activities.

Extending the model to capture and classify more classes (human activities).

Capturing more labelled real-time data to improve efficiency.


Appendix : Code

HAR.py:

# Imports: pandas/numpy for data handling, matplotlib for plots,
# scipy.stats for windowed label voting, Keras for the CNN and LSTM models.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
from keras.models import Sequential
from keras.layers import Dense, Conv2D, MaxPooling2D, Flatten, Dropout, LSTM
from keras.optimizers import Adam
from sklearn.metrics import classification_report
import glob
import h5py
import itertools
%matplotlib inline
from pathlib import Path

# Fix the random seed for reproducible train/test splits.
random_seed = 611
np.random.seed(random_seed)

def readData(filePath):
    # Read a single raw accelerometer file into a DataFrame.
    columnNames = ['user_id', 'activity', 'timestamp', 'x-axis', 'y-axis', 'z-axis']
    data = pd.read_csv(filePath, header=None, names=columnNames, na_values=';')
    return data

def read_multiple_data(files):
    # Concatenate every file matching the glob pattern into one DataFrame.
    column_names = ['user-id', 'activity', 'timestamp', 'x-axis', 'y-axis', 'z-axis']
    df = pd.concat([pd.read_csv(f, header=None, names=column_names, sep=',')
                    for f in glob.glob(files)], ignore_index=True)
    return df

def featureNormalize(dataset):
    # Standardise each feature to zero mean and unit variance.
    mu = np.mean(dataset, axis=0)
    sigma = np.std(dataset, axis=0)
    return (dataset - mu) / sigma

def plotAxis(axis, x, y, title):
    axis.plot(x, y)
    axis.set_title(title)
    axis.xaxis.set_visible(False)
    axis.set_ylim([min(y) - np.std(y), max(y) + np.std(y)])
    axis.set_xlim([min(x), max(x)])
    axis.grid(True)

def plotActivity(activity, data):
    # Plot the x, y and z accelerometer traces for one activity.
    fig, (ax0, ax1, ax2) = plt.subplots(nrows=3, figsize=(15, 10), sharex=True)
    plotAxis(ax0, data['timestamp'], data['x-axis'], 'x-axis')
    plotAxis(ax1, data['timestamp'], data['y-axis'], 'y-axis')
    plotAxis(ax2, data['timestamp'], data['z-axis'], 'z-axis')
    plt.subplots_adjust(hspace=0.2)
    fig.suptitle(activity)
    plt.subplots_adjust(top=0.9)
    plt.show()

def windows(data, size):
    # Generate half-overlapping window boundaries over the signal.
    start = 0
    while start < data.count():
        yield int(start), int(start + size)
        start += (size / 2)

def segment_signal(data, window_size=90):
    # Slice the signal into fixed-length windows and label each window
    # with the most frequent activity inside it.
    segments = np.empty((0, window_size, 3))
    labels = np.empty((0))
    for (start, end) in windows(data['timestamp'], window_size):
        x = data['x-axis'][start:end]
        y = data['y-axis'][start:end]
        z = data['z-axis'][start:end]
        if len(data['timestamp'][start:end]) == window_size:
            segments = np.vstack([segments, np.dstack([x, y, z])])
            labels = np.append(labels, stats.mode(data['activity'][start:end])[0][0])
    return segments, labels

dataset = read_multiple_data('input/WISDM_at_v2.0_raw_modifya*')
dataset.dropna(axis=0, how='any', inplace=True)
segments, labels = segment_signal(dataset)
class_labels = np.unique(labels)
# One-hot encode the activity labels.
labels = np.asarray(pd.get_dummies(labels), dtype=np.int8)
labels

array([[0, 0, 0, 0, 0, 1],
       [0, 0, 0, 0, 0, 1],
       [0, 0, 0, 0, 0, 1],
       ...,
       [0, 1, 0, 0, 0, 0],
       [0, 1, 0, 0, 0, 0],
       [0, 1, 0, 0, 0, 0]], dtype=int8)

numOfRows = segments.shape[1]
numOfColumns = segments.shape[2]
print(numOfRows, numOfColumns)

90 3

CNN

# Add a channel dimension so each window is a (90, 3, 1) "image" for the CNN.
reshapedSegments = segments.reshape(segments.shape[0], numOfRows, numOfColumns, 1)
print(reshapedSegments.shape)

(11628, 90, 3, 1)

# Random 80/20 train/test split.
sr = np.random.rand(len(reshapedSegments)) < 0.8
c_X_train = reshapedSegments[sr]
c_X_test = reshapedSegments[~sr]
c_X_train = np.nan_to_num(c_X_train)
c_X_test = np.nan_to_num(c_X_test)
c_Y_train = labels[sr]
c_Y_test = labels[~sr]
print(c_X_train.shape, c_Y_train.shape, c_X_test.shape, c_Y_test.shape)

(9277, 90, 3, 1) (9277, 6) (2351, 90, 3, 1) (2351, 6)

cnn_model = Sequential()
cnn_model.add(Conv2D(128, (2, 2), input_shape=(90, 3, 1), activation='relu'))
cnn_model.add(MaxPooling2D(pool_size=(2, 2), padding='valid'))
cnn_model.add(Dropout(0.2))
cnn_model.add(Flatten())
cnn_model.add(Dense(128, activation='relu'))
cnn_model.add(Dense(128, activation='relu'))
cnn_model.add(Dense(6, activation='softmax'))

adam = Adam(lr=0.0001, decay=1e-6)
cnn_model.compile(loss='categorical_crossentropy',
                  optimizer=adam,
                  metrics=['accuracy'])

cnn_model.fit(c_X_train,
              c_Y_train,
              validation_data=(c_X_test, c_Y_test),
              epochs=10,
              batch_size=10)

cnn_score = cnn_model.evaluate(c_X_test, c_Y_test)
print(cnn_score)

2351/2351 [==============================] - 0s 198us/step
[0.18344726768864603, 0.9502339430029775]

Y_cnn_pred = cnn_model.predict_classes(c_X_test)
print(Y_cnn_pred.shape, c_Y_test.shape)

(2351,) (2351, 6)

# Save the CNN architecture and weights.
cnn_model_str = cnn_model.to_json()
f = Path("model/cnn_model_str.json")
f.write_text(cnn_model_str)

2598

cnn_model.save_weights('model/cnn_model_weights.h5')

LSTM

# The LSTM consumes each window as a sequence of 90 timesteps with 3 features.
new_shaped_data = segments.reshape(segments.shape[0], numOfRows, numOfColumns)
print(new_shaped_data.shape)

(11628, 90, 3)

sratio = np.random.rand(len(new_shaped_data)) < 0.8
X_train = new_shaped_data[sratio]
X_test = new_shaped_data[~sratio]
X_train = np.nan_to_num(X_train)
X_test = np.nan_to_num(X_test)
Y_train = labels[sratio]
Y_test = labels[~sratio]

lstm_model = Sequential()
lstm_model.add(LSTM(128, input_shape=(90, 3)))
lstm_model.add(Dropout(0.25))
lstm_model.add(Dense(6, activation='softmax'))
lstm_model.compile(loss='binary_crossentropy',
                   optimizer=adam,
                   metrics=['accuracy'])

lstm_model.fit(X_train,
               Y_train,
               validation_data=(X_test, Y_test),
               epochs=10,
               batch_size=10)

lstm_score = lstm_model.evaluate(X_test, Y_test, verbose=2)
print(lstm_score)

[0.05474479000914698, 0.9843684068751527]

Y_lstm_pred = lstm_model.predict_classes(X_test)
print(Y_lstm_pred.shape, Y_test.shape)

(2367,) (2367, 6)

# Save the LSTM architecture and weights.
lstm_model_str = lstm_model.to_json()
f = Path("model/lstm_model_str.json")
f.write_text(lstm_model_str)

1637

lstm_model.save_weights('model/lstm_model_weights.h5')

COMPARISON GRAPH

n_groups = 2
ind = np.arange(n_groups)
algo = ('CNN', 'LSTM')
accuracy = (cnn_score[1], lstm_score[1])
error = (1 - cnn_score[1], 1 - lstm_score[1])
loss = (cnn_score[0], lstm_score[0])
print(error, accuracy, loss)

(0.04976605699702252, 0.0156315931248473) (0.9502339430029775, 0.9843684068751527) (0.18344726768864603, 0.05474479000914698)

# Grouped/stacked bar chart of loss, error and accuracy for the two models.
fig, ax = plt.subplots(figsize=(8, 6), dpi=80)
bar_width = 0.2
opacity = 0.9
p1 = plt.bar(ind + bar_width, loss, bar_width, alpha=opacity, color='blue', label='loss')
p1 = plt.bar(ind, error, bar_width, alpha=opacity, color='red', label='error', bottom=accuracy)
p2 = plt.bar(ind, accuracy, bar_width, alpha=opacity, color='lime', label='accuracy')
plt.title('Comparison Graph')
plt.xticks(ind + bar_width / 2, algo)
plt.xlabel('Algorithm')
plt.ylabel('Scores')
plt.legend()
plt.tight_layout()
plt.show()

TEST MODEL ON NEW DATA

test_data = pd.read_csv('input/test_walk.csv')
test_data.shape

(13779, 3)

# Take the first 90 samples as one input window.
input_data = test_data.loc[0:89, :]
input_data.head()

   x-axis    y-axis    z-axis
0  0.009521  5.468887  7.698410
1 -0.194946  5.472244  7.702713
2 -0.164063  5.456436  7.709900
3 -0.213623  5.471512  7.703903
4 -0.198776  5.495941  7.685471

input_data = input_data.to_numpy().reshape((1, 90, 3))
input_data.shape

(1, 90, 3)

# Reload the saved LSTM model and its weights.
from keras.models import model_from_json
from pathlib import Path
f = Path('model/lstm_model_str.json')
loaded_lstm_str = f.read_text()
loaded_lstm = model_from_json(loaded_lstm_str)
loaded_lstm.load_weights('model/lstm_model_weights.h5')
loaded_lstm.summary()

Layer (type)                 Output Shape              Param #
=================================================================
lstm_1 (LSTM)                (None, 128)               67584
_________________________________________________________________
dropout_2 (Dropout)          (None, 128)               0
_________________________________________________________________
dense_4 (Dense)              (None, 6)                 774
=================================================================
Total params: 68,358
Trainable params: 68,358
Non-trainable params: 0

predictions = loaded_lstm.predict(input_data)
result = predictions[0]
print(result)

[1.1413740e-02 3.6693010e-03 9.6202236e-01 3.0899953e-04 8.8642472e-03
 1.3721360e-02]

# Pick the class with the highest predicted probability.
most_likely_class_index = int(np.argmax(result))
most_likely_class_index
class_likelihood = result[most_likely_class_index]
class_likelihood

0.96202236

class_label = class_labels[most_likely_class_index]
print('Predicted activity is :', class_label)

Predicted activity is : Sitting
