I am experimenting with ml5 and sign language, and right now I am trying to make a model learn moving signs. My code uses MediaPipe to collect landmark data over a sequence of frames, flattens it into a vector of 3780 values, and saves it in a variable that looks like this:
{
  "label": "wave",
  "vector": [
    0.7023592591285706,
    0.9515798091888428,
    ... (3780 values in total)
  ]
}
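For context, this is roughly how I feed those samples into the network (a simplified sketch; `samples` and the exact option values are stand-ins, not copied from my actual code):

```javascript
// Simplified sketch of my data-loading step (ml5.js 0.x).
// `samples` stands in for the array of { label, vector } objects shown above.
const options = {
  task: 'classification',
  inputs: 3780,   // length of each flattened landmark vector
  outputs: 3,     // number of poses/labels
  debug: true,
};
const brain = ml5.neuralNetwork(options);

for (const sample of samples) {
  // inputs: the 3780 landmark values, output: the pose label
  brain.addData(sample.vector, [sample.label]);
}

// let ml5 scale the inputs before training
brain.normalizeData();
```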
I have 3 poses with about 35 examples in total. When I train an ml5 neural network on this data, the loss value goes up instead of down.
I am using the following parameters for my model (the training call is sketched below):
Learning rate: 0.15
Epochs: 30
Hidden units: 20
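The training step looks roughly like this (again simplified; I have not explicitly set a batch size, so the value shown here is only illustrative):

```javascript
// Simplified sketch of the training step with the parameters listed above.
// In ml5, learningRate and hiddenUnits are passed when the network is
// created, while epochs (and optionally batchSize) are passed to train().
const brain = ml5.neuralNetwork({
  task: 'classification',
  inputs: 3780,
  outputs: 3,
  learningRate: 0.15,
  hiddenUnits: 20,
  debug: true,
});

// ...addData() and normalizeData() as in the snippet above...

brain.train(
  { epochs: 30, batchSize: 16 },  // batchSize 16 is illustrative, not from my code
  (epoch, loss) => console.log('epoch', epoch, 'loss', loss),  // logged per epoch
  () => console.log('training finished')
);
```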
What is causing my loss to go up? Could this be the result of a small batch size?