Neural Network - SMS Text Classification#

Create a machine learning model that will classify SMS messages as either “ham” or “spam”. A “ham” message is a normal message sent by a friend. A “spam” message is an advertisement or a message sent by a company.

Create a function called predict_message that takes a message string as an argument and returns a list. The first element in the list should be a number between zero and one that indicates the likelihood of “ham” (0) or “spam” (1). The second element in the list should be the word “ham” or “spam”, depending on which is more likely.

Data from the SMS Spam Collection dataset.

#try:
  # %tensorflow_version only exists in Colab.
#  !pip install tf-nightly
#except Exception:
#  pass
#!pip install tensorflow-datasets
#!pip install wordcloud

#!pip install --upgrade numpy
#!pip install --upgrade pandas
# import libraries
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

import tensorflow as tf
import pandas as pd
from tensorflow import keras
import tensorflow_datasets as tfds
import numpy as np
import matplotlib.pyplot as plt

import seaborn as sns
from wordcloud import WordCloud, STOPWORDS, ImageColorGenerator

from sklearn.model_selection import train_test_split
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

# Modeling 
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, GlobalAveragePooling1D, Dense, Dropout, LSTM, Bidirectional, Flatten

print(tf.__version__)
2.13.0
# get data files
#!wget https://cdn.freecodecamp.org/project-data/sms/train-data.tsv
#!wget https://cdn.freecodecamp.org/project-data/sms/valid-data.tsv

train_file_path = "data/train-data.tsv"
test_file_path = "data/valid-data.tsv"
# load data files
train_file = pd.read_csv(train_file_path, sep='\t', names=["Class", "Message"])
test_file = pd.read_csv(test_file_path, sep='\t', names=["Class", "Message"])

The files are tab separated (\t), and I provide the column names “Class” and “Message”.

Exploratory Data Analysis#

train_file.describe()
       Class                 Message
count   4179                    4179
unique     2                    3935
top      ham  sorry, i'll call later
freq    3619                      22

There are two unique classes: “ham” and “spam”. There are fewer unique messages (3935) than total messages (4179), indicating that some messages are repeated. The top class is “ham” and the top message is “sorry, i'll call later”.
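
As a quick look at those repeated messages, value_counts on the Message column lists the most frequent texts (a small optional check, not part of the original pipeline):

# Optional check: the most frequently repeated training messages
train_file['Message'].value_counts().head()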

train_file.groupby("Class").describe().T
Class                             ham                                               spam
Message count                    3619                                                560
        unique                   3430                                                505
        top    sorry, i'll call later  hmv bonus special 500 pounds of genuine hmv vo...
        freq                       22                                                  3

There are 3619 ham messages compared to 560 spam messages, so the data is imbalanced. The most frequent ham message, “sorry, i'll call later”, occurred 22 times, whereas the most frequent spam message, “hmv bonus special 500 pounds of genuine hmv vo...”, occurred 3 times.

Wordcloud#

Wordcloud allows us to visualize the most frequent words in the given text.

# Get all the ham and spam messages from the training data
train_ham = train_file[train_file.Class =='ham']
train_spam = train_file[train_file.Class =='spam']
# Join all messages of each class into a single string for the wordcloud
train_ham_text = " ".join(train_ham.Message.to_numpy().tolist())
train_spam_text = " ".join(train_spam.Message.to_numpy().tolist())
# wordcloud of ham messages
ham_msg_cloud = WordCloud(width=520, height=260, stopwords=STOPWORDS, max_font_size=50, 
                          background_color="black", colormap='Blues').generate(train_ham_text)
plt.figure(figsize=(16,10))
plt.imshow(ham_msg_cloud, interpolation='bilinear')
plt.axis('off') # turn off axis
plt.show()
[Figure: WordCloud of the most frequent words in ham messages]

The ham message WordCloud shows that “love”, “going”, “come”, “home”, etc. are among the most common words in ham messages.

# wordcloud of spam messages
spam_msg_cloud = WordCloud(width=520, height=260, stopwords=STOPWORDS, max_font_size=50, 
                           background_color="black", colormap='Blues').generate(train_spam_text)
plt.figure(figsize=(16,10))
plt.imshow(spam_msg_cloud, interpolation='bilinear')
plt.axis('off') # turn off axis
plt.show()
[Figure: WordCloud of the most frequent words in spam messages]

The spam message WordCloud shows that “free”, “text”, “call”, etc. are among the most common words in spam messages.

Data Processing#

Data Imbalance#

# Ratio of spam to ham messages in the training data
len(train_spam)/len(train_ham)*100 # ~15.5%

# Countplot showing the class imbalance
plt.figure(figsize=(8,6))
sns.countplot(x="Class", data=train_file)
plt.show()
[Figure: countplot of ham vs. spam message counts]

The bar chart shows that the classes are imbalanced: about 87% of the training messages are ham.
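
The exact proportions can be confirmed with value_counts (an optional sketch):

# Optional check: class proportions in the training data (ham ~0.87, spam ~0.13)
train_file['Class'].value_counts(normalize=True)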

Downsampling#

To handle the imbalanced data, we downsample. Downsampling randomly deletes observations from the majority class so that the majority and minority classes end up with the same number of examples.

# Downsample the ham msg for train
train_ham_df = train_ham.sample(n = len(train_spam), random_state = 44)
train_spam_df = train_spam
print(train_ham_df.shape, train_spam_df.shape)
train_df = pd.concat([train_ham_df, train_spam_df]).reset_index(drop=True)
(560, 2) (560, 2)
# Downsample for test file
test_ham = test_file[test_file.Class =='ham']
test_spam = test_file[test_file.Class =='spam']

# Downsample the ham msg for test
test_ham_df = test_ham.sample(n = len(test_spam), random_state = 44)
test_spam_df = test_spam
print(test_ham_df.shape, test_spam_df.shape)
test_df = pd.concat([test_ham_df, test_spam_df]).reset_index(drop=True)
(187, 2) (187, 2)

Train and test preparation#

# Map ham label as 0 and spam as 1
train_df['msg_type']= train_df['Class'].map({'ham': 0, 'spam': 1})
test_df['msg_type']= test_df['Class'].map({'ham': 0, 'spam': 1})


train_msg = train_df['Message']
train_labels = train_df['msg_type'].values
test_msg = test_df['Message']
test_labels = test_df['msg_type'].values

Tokenization#

The Tokenizer API from TensorFlow Keras splits sentences into words and encodes them as integers; a toy example follows the list below.

  • Tokenize into words

  • Filter out punctuation and rare words

  • Convert all words to lower case and to integer index
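
A toy illustration of this mapping (not part of the original pipeline; the two example sentences are made up):

# Toy example: fit a small Tokenizer and inspect the word-to-integer mapping
toy_tokenizer = Tokenizer(num_words=10, oov_token="<OOV>")
toy_tokenizer.fit_on_texts(["Free entry now", "call me now"])
print(toy_tokenizer.word_index)                               # e.g. {'<OOV>': 1, 'now': 2, 'free': 3, ...}
print(toy_tokenizer.texts_to_sequences(["free call later"]))  # unseen word 'later' maps to the <OOV> index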

# Defining tokenizer hyperparameters
max_len = 50            # maximum sequence length after padding/truncation
trunc_type = "post"     # truncate overly long sequences at the end
padding_type = "post"   # pad short sequences at the end
oov_tok = "<OOV>"       # placeholder token for out-of-vocabulary words
vocab_size = 500        # limit the vocabulary to the most frequent words
tokenizer = Tokenizer(num_words = vocab_size, char_level=False, oov_token = oov_tok)
tokenizer.fit_on_texts(train_msg)
# Get the word_index 
word_index = tokenizer.word_index
# How many words 
tot_words = len(word_index)
print('There are %s unique tokens in training data. ' % tot_words)
There are 3999 unique tokens in training data. 

Sequencing and Padding#

Use texts_to_sequences() to represent each sentence as a sequence of integers produced by the tokenizer object.

Use pad_sequences() so that all sequences have the same length (a toy illustration follows below).
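
A minimal illustration of post-padding and post-truncation on made-up integer sequences (not part of the original pipeline):

# Toy example: pad/truncate two integer sequences to a fixed length of 5
toy_seqs = [[3, 5, 1], [2, 4, 6, 8, 9, 7, 1]]
print(pad_sequences(toy_seqs, maxlen=5, padding="post", truncating="post"))
# [[3 5 1 0 0]
#  [2 4 6 8 9]]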

# Sequencing and padding on training and testing 
training_sequences = tokenizer.texts_to_sequences(train_msg)
training_padded = pad_sequences(training_sequences, maxlen = max_len, padding = padding_type, truncating = trunc_type)
testing_sequences = tokenizer.texts_to_sequences(test_msg)
testing_padded = pad_sequences(testing_sequences, maxlen = max_len,
                                padding = padding_type, truncating = trunc_type)
# Shape of train tensor
print('Shape of training tensor: ', training_padded.shape)
print('Shape of testing tensor: ', testing_padded.shape)
Shape of training tensor:  (1120, 50)
Shape of testing tensor:  (374, 50)

Dense Spam Detection Model#

Train three models: a Dense architecture, followed by LSTM and Bi-LSTM architectures.

# hyper-parameters
vocab_size = 500    # vocabulary size (matches the tokenizer's num_words)
embeding_dim = 16   # dimension of the embedding vectors
drop_value = 0.2    # dropout rate
n_dense = 24        # units in the hidden Dense layer
# Dense model architecture
dense_model = Sequential()
dense_model.add(Embedding(vocab_size, embeding_dim, input_length=max_len))
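# GlobalAveragePooling1D averages the 16-dimensional embedding vectors over the 50 token positions into one vector per message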
dense_model.add(GlobalAveragePooling1D())
dense_model.add(Dense(n_dense, activation='relu'))
dense_model.add(Dropout(drop_value))
dense_model.add(Dense(1, activation='sigmoid'))
dense_model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# fitting a dense spam detector model
num_epochs = 30
early_stop = EarlyStopping(monitor='val_loss', patience=3)
history = dense_model.fit(training_padded, train_labels, epochs=num_epochs, 
                    validation_data=(testing_padded, test_labels), 
                    callbacks =[early_stop], verbose=2)
Epoch 1/30
35/35 - 1s - loss: 0.6878 - accuracy: 0.7366 - val_loss: 0.6782 - val_accuracy: 0.8583 - 914ms/epoch - 26ms/step
Epoch 2/30
35/35 - 0s - loss: 0.6661 - accuracy: 0.8661 - val_loss: 0.6492 - val_accuracy: 0.8583 - 81ms/epoch - 2ms/step
Epoch 3/30
35/35 - 0s - loss: 0.6267 - accuracy: 0.8634 - val_loss: 0.5960 - val_accuracy: 0.8690 - 77ms/epoch - 2ms/step
Epoch 4/30
35/35 - 0s - loss: 0.5586 - accuracy: 0.8884 - val_loss: 0.5144 - val_accuracy: 0.8770 - 75ms/epoch - 2ms/step
Epoch 5/30
35/35 - 0s - loss: 0.4699 - accuracy: 0.8911 - val_loss: 0.4260 - val_accuracy: 0.8824 - 84ms/epoch - 2ms/step
Epoch 6/30
35/35 - 0s - loss: 0.3833 - accuracy: 0.9018 - val_loss: 0.3495 - val_accuracy: 0.8957 - 83ms/epoch - 2ms/step
Epoch 7/30
35/35 - 0s - loss: 0.3128 - accuracy: 0.9179 - val_loss: 0.2915 - val_accuracy: 0.8984 - 75ms/epoch - 2ms/step
Epoch 8/30
35/35 - 0s - loss: 0.2615 - accuracy: 0.9312 - val_loss: 0.2498 - val_accuracy: 0.9091 - 83ms/epoch - 2ms/step
Epoch 9/30
35/35 - 0s - loss: 0.2230 - accuracy: 0.9339 - val_loss: 0.2184 - val_accuracy: 0.9118 - 84ms/epoch - 2ms/step
Epoch 10/30
35/35 - 0s - loss: 0.1911 - accuracy: 0.9491 - val_loss: 0.1972 - val_accuracy: 0.9198 - 86ms/epoch - 2ms/step
Epoch 11/30
35/35 - 0s - loss: 0.1717 - accuracy: 0.9563 - val_loss: 0.1755 - val_accuracy: 0.9225 - 84ms/epoch - 2ms/step
Epoch 12/30
35/35 - 0s - loss: 0.1511 - accuracy: 0.9580 - val_loss: 0.1612 - val_accuracy: 0.9305 - 83ms/epoch - 2ms/step
Epoch 13/30
35/35 - 0s - loss: 0.1340 - accuracy: 0.9607 - val_loss: 0.1494 - val_accuracy: 0.9385 - 83ms/epoch - 2ms/step
Epoch 14/30
35/35 - 0s - loss: 0.1274 - accuracy: 0.9643 - val_loss: 0.1412 - val_accuracy: 0.9385 - 82ms/epoch - 2ms/step
Epoch 15/30
35/35 - 0s - loss: 0.1107 - accuracy: 0.9688 - val_loss: 0.1333 - val_accuracy: 0.9439 - 81ms/epoch - 2ms/step
Epoch 16/30
35/35 - 0s - loss: 0.1126 - accuracy: 0.9652 - val_loss: 0.1286 - val_accuracy: 0.9439 - 81ms/epoch - 2ms/step
Epoch 17/30
35/35 - 0s - loss: 0.1012 - accuracy: 0.9705 - val_loss: 0.1236 - val_accuracy: 0.9492 - 75ms/epoch - 2ms/step
Epoch 18/30
35/35 - 0s - loss: 0.0938 - accuracy: 0.9705 - val_loss: 0.1187 - val_accuracy: 0.9572 - 83ms/epoch - 2ms/step
Epoch 19/30
35/35 - 0s - loss: 0.0841 - accuracy: 0.9705 - val_loss: 0.1147 - val_accuracy: 0.9599 - 81ms/epoch - 2ms/step
Epoch 20/30
35/35 - 0s - loss: 0.0812 - accuracy: 0.9768 - val_loss: 0.1129 - val_accuracy: 0.9572 - 76ms/epoch - 2ms/step
Epoch 21/30
35/35 - 0s - loss: 0.0801 - accuracy: 0.9714 - val_loss: 0.1104 - val_accuracy: 0.9626 - 75ms/epoch - 2ms/step
Epoch 22/30
35/35 - 0s - loss: 0.0723 - accuracy: 0.9777 - val_loss: 0.1068 - val_accuracy: 0.9599 - 79ms/epoch - 2ms/step
Epoch 23/30
35/35 - 0s - loss: 0.0722 - accuracy: 0.9777 - val_loss: 0.1055 - val_accuracy: 0.9626 - 81ms/epoch - 2ms/step
Epoch 24/30
35/35 - 0s - loss: 0.0655 - accuracy: 0.9804 - val_loss: 0.1032 - val_accuracy: 0.9626 - 82ms/epoch - 2ms/step
Epoch 25/30
35/35 - 0s - loss: 0.0661 - accuracy: 0.9821 - val_loss: 0.1020 - val_accuracy: 0.9626 - 84ms/epoch - 2ms/step
Epoch 26/30
35/35 - 0s - loss: 0.0605 - accuracy: 0.9821 - val_loss: 0.1005 - val_accuracy: 0.9626 - 72ms/epoch - 2ms/step
Epoch 27/30
35/35 - 0s - loss: 0.0551 - accuracy: 0.9848 - val_loss: 0.1011 - val_accuracy: 0.9599 - 73ms/epoch - 2ms/step
Epoch 28/30
35/35 - 0s - loss: 0.0563 - accuracy: 0.9848 - val_loss: 0.1004 - val_accuracy: 0.9626 - 82ms/epoch - 2ms/step
Epoch 29/30
35/35 - 0s - loss: 0.0536 - accuracy: 0.9857 - val_loss: 0.1010 - val_accuracy: 0.9626 - 82ms/epoch - 2ms/step
Epoch 30/30
35/35 - 0s - loss: 0.0550 - accuracy: 0.9839 - val_loss: 0.0994 - val_accuracy: 0.9679 - 82ms/epoch - 2ms/step
# Model performance on test data 
dense_model.evaluate(testing_padded, test_labels)
12/12 [==============================] - 0s 1ms/step - loss: 0.0994 - accuracy: 0.9679
[0.09943579882383347, 0.9679144620895386]
# Read as a dataframe 
metrics = pd.DataFrame(history.history)
# Rename column
metrics.rename(columns = {'loss': 'Training_Loss', 'accuracy': 'Training_Accuracy', 'val_loss': 'Validation_Loss', 'val_accuracy': 'Validation_Accuracy'}, inplace = True)
def plot_dense(var1, var2, string):
    metrics[[var1, var2]].plot()
    plt.title('Training and Validation ' + string)
    plt.xlabel ('Number of epochs')
    plt.ylabel(string)
    plt.legend([var1, var2])

plot_dense('Training_Loss', 'Validation_Loss', 'loss')
plot_dense('Training_Accuracy', 'Validation_Accuracy', 'accuracy')
[Figures: Dense model training and validation loss; training and validation accuracy]

Long Short Term Memory (LSTM) Model#

This neural network is capable of learning order dependence in sequence prediction problems.

# LSTM hyperparameters
n_lstm = 20      # units in each LSTM layer
drop_lstm = 0.2  # dropout rate inside the LSTM layers
# LSTM Spam detection architecture
LSTM_model = Sequential()
LSTM_model.add(Embedding(vocab_size, embeding_dim, input_length=max_len))
LSTM_model.add(LSTM(n_lstm, dropout=drop_lstm, return_sequences=True))
LSTM_model.add(LSTM(n_lstm, dropout=drop_lstm, return_sequences=True))
LSTM_model.add(Dense(1, activation='sigmoid'))
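# The Dense layer produces one sigmoid output per timestep; GlobalAveragePooling1D averages them into a single score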
LSTM_model.add(GlobalAveragePooling1D())
#LSTM_model.add(Flatten())
LSTM_model.compile(loss = 'binary_crossentropy', optimizer = 'adam', metrics=['accuracy'])
num_epochs = 30
early_stop = EarlyStopping(monitor='val_loss', patience=2)
history = LSTM_model.fit(training_padded, train_labels, epochs=num_epochs, validation_data=(testing_padded, test_labels), callbacks =[early_stop], verbose=0)
# Create a dataframe
metrics = pd.DataFrame(history.history)
# Rename column
metrics.rename(columns = {'loss': 'Training_Loss', 'accuracy': 'Training_Accuracy',
                         'val_loss': 'Validation_Loss', 'val_accuracy': 'Validation_Accuracy'}, inplace = True)
def plot_LSTM(var1, var2, string):
    metrics[[var1, var2]].plot()
    plt.title('LSTM Model: Training and Validation ' + string)
    plt.xlabel ('Number of epochs')
    plt.ylabel(string)
    plt.legend([var1, var2])
plot_LSTM('Training_Loss', 'Validation_Loss', 'loss')
plot_LSTM('Training_Accuracy', 'Validation_Accuracy', 'accuracy')
[Figures: LSTM model training and validation loss; training and validation accuracy]

Bi-directional Long Short Term Memory (BiLSTM) Model#

Bi-LSTM learns patterns from before and after a given token.

# Bidirectional LSTM spam detection architecture
biLSTM_model = Sequential()
biLSTM_model.add(Embedding(vocab_size, embeding_dim, input_length=max_len))
biLSTM_model.add(Bidirectional(LSTM(n_lstm, dropout=drop_lstm, return_sequences=True)))
biLSTM_model.add(Dense(1, activation='sigmoid'))
biLSTM_model.add(GlobalAveragePooling1D())
#biLSTM_model.add(Flatten())
biLSTM_model.compile(loss = 'binary_crossentropy', optimizer = 'adam', metrics=['accuracy'])
# Training
num_epochs = 30
early_stop = EarlyStopping(monitor='val_loss', patience=2)
history = biLSTM_model.fit(training_padded, train_labels, epochs=num_epochs, 
                    validation_data=(testing_padded, test_labels),callbacks =[early_stop], verbose=0)
# Create a dataframe
metrics = pd.DataFrame(history.history)
# Rename column
metrics.rename(columns = {'loss': 'Training_Loss', 'accuracy': 'Training_Accuracy',
                         'val_loss': 'Validation_Loss', 'val_accuracy': 'Validation_Accuracy'}, inplace = True)
def plot_biLSTM(var1, var2, string):
    metrics[[var1, var2]].plot()
    plt.title('BiLSTM Model: Training and Validation ' + string)
    plt.xlabel ('Number of epochs')
    plt.ylabel(string)
    plt.legend([var1, var2])
# Plot
plot_biLSTM('Training_Loss', 'Validation_Loss', 'loss')
plot_biLSTM('Training_Accuracy', 'Validation_Accuracy', 'accuracy')
[Figures: BiLSTM model training and validation loss; training and validation accuracy]

Choose Model#

# Comparing three different models
print(f"Dense architecture loss and accuracy: {dense_model.evaluate(testing_padded, test_labels)} " )
print(f"LSTM architecture loss and accuracy: {LSTM_model.evaluate(testing_padded, test_labels)} " )
print(f"Bi-LSTM architecture loss and accuracy: {biLSTM_model.evaluate(testing_padded, test_labels)} " )
12/12 [==============================] - 0s 1ms/step - loss: 0.0994 - accuracy: 0.9679
Dense architecture loss and accuracy: [0.09943579882383347, 0.9679144620895386] 
12/12 [==============================] - 0s 7ms/step - loss: 0.1921 - accuracy: 0.9492
LSTM architecture loss and accuracy: [0.19211576879024506, 0.9491978883743286] 
12/12 [==============================] - 0s 5ms/step - loss: 0.1196 - accuracy: 0.9599
Bi-LSTM architecture loss and accuracy: [0.11963865906000137, 0.9598930478096008] 

Based on loss, accuracy, and the plots above, we select the Dense architecture as the final model for classifying messages as ham or spam.
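
If the selected model is to be reused outside this notebook, it can be saved with Keras; the fitted tokenizer must also be kept so new messages are encoded the same way. A small optional sketch (the file name is an arbitrary choice):

# Optional: persist the selected model for later reuse (file name is arbitrary)
dense_model.save("sms_dense_model.keras")
# reloaded = keras.models.load_model("sms_dense_model.keras")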

Prediction#

# function to predict messages based on model
# (should return list containing prediction and label, ex. [0.008318834938108921, 'ham'])
def predict_message(pred_text):
    pred_text = [pred_text]
    new_seq = tokenizer.texts_to_sequences(pred_text)
    padded = pad_sequences(new_seq, maxlen=max_len,
                      padding = padding_type,
                      truncating=trunc_type)
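    # the dense model outputs the probability of spam (label 1)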
    prediction = (dense_model.predict(padded))[0][0]
    if prediction < 0.5:
        return [prediction, 'ham']
    else:
        return [prediction, 'spam']
pred_text = "how are you doing today?"

prediction = predict_message(pred_text)
print(prediction)
1/1 [==============================] - 0s 83ms/step
[0.0077273077, 'ham']
# Run this cell to test your function and model. 
def test_predictions():
    test_messages = ["how are you doing today",
                   "sale today! to stop texts call 98912460324",
                   "i dont want to go. can we try it a different day? available sat",
                   "our new mobile video service is live. just install on your phone to start watching.",
                   "you have won Ā£1000 cash! call to claim your prize.",
                   "i'll bring it tomorrow. don't forget the milk.",
                   "wow, is your arm alright. that happened to me one time too"
                  ]

    test_answers = ["ham", "spam", "ham", "spam", "spam", "ham", "ham"]
    passed = True

    for msg, ans in zip(test_messages, test_answers):
        prediction = predict_message(msg)
        if prediction[1] != ans:
            passed = False

    if passed:
        print("You passed the challenge. Great job!")
    else:
        print("You haven't passed yet. Keep trying.")

test_predictions()
You passed the challenge. Great job!