raML - Near Goals
Big:
1. Model compilation
2. Validation
3. Optimizers
Small:
1. Lambda Layer
2. Data normalization as a layer (maybe?)
Progress so far:
Implemented model compilation. Now, creating a Deep Neural Network is as easy as it is in Keras:
model = Sequential([
    Dense(size=3, input_shape=X.shape),
    Dense(size=1, activation=Sigmoid)
])
model.compile(cost=MSE(), metrics=[RMSE()])
Looks just like Keras, you say? Well, good, because Keras does model creation the right way!
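For the curious, here's a minimal sketch of what a compile step like the one above could boil down to. This is not the actual raML code, just my assumptions: a Sequential that stores the layers, a compile() that wires the cost and metrics onto the model, and a forward pass that chains the layers.

class Sequential:
    def __init__(self, layers):
        self.layers = layers          # e.g. the two Dense layers above
        self.cost = None
        self.metrics = []

    def compile(self, cost, metrics=()):
        # "Compilation" here just attaches the cost function and metrics to the model.
        self.cost = cost
        self.metrics = list(metrics)

    def forward(self, X):
        # Each layer transforms the output of the previous one
        # (assuming every layer exposes a forward() method).
        out = X
        for layer in self.layers:
            out = layer.forward(out)
        return out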
I've also added ReLU, but I'm still testing to make sure it's working right. This actually made me realize I should organize the optimizers!
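For reference, the activation itself is the easy part; what I'm still checking is how it behaves during training. A standalone NumPy version (not the raML layer) looks like this:

import numpy as np

def relu(z):
    # Pass positive values through unchanged, zero out the rest.
    return np.maximum(0.0, z)

def relu_grad(z):
    # Derivative: 1 for z > 0, 0 elsewhere (taking 0 at z == 0 by convention).
    return (z > 0).astype(float)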
Update: After investigating, I found out that the problem is most likely exploding gradients. I didn't expect them to appear this early!
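A toy illustration of why this happens (not raML code, and it ignores the activation derivatives): during backpropagation the gradient gets multiplied by a weight matrix at every layer, so with weights drawn from uniform [0, 1] (all positive, mean 0.5) the norm blows up by a large factor per layer.

import numpy as np

rng = np.random.default_rng(0)
grad = np.ones(100)  # pretend gradient arriving at the last layer
for layer in range(5):
    W = rng.uniform(0, 1, size=(100, 100))  # all-positive weights, mean 0.5
    grad = W.T @ grad                        # one backprop step through the layer
    print(f"after layer {layer + 1}: gradient norm = {np.linalg.norm(grad):.3e}")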
Update 2: Oh, this is so cool! After finding exploding gradients in a relatively small network, I knew the cause probably wasn't the learning rate (although making it smaller did help), but rather the weight initialization. That's actually worth a separate blog post, but in short: I used to sample weights from a uniform [0, 1] distribution, and it's much better to sample from a normal distribution centered at 0. (Note: that doesn't fully solve it; for best performance, one also needs to take the variance into account, which should depend on the layer's size.)
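Here's a quick NumPy comparison of the schemes (just a sketch, not the raML code; the fan-in scaling at the end is the standard Xavier/Glorot-style way to make the variance depend on the layer's size):

import numpy as np

rng = np.random.default_rng(0)
fan_in, fan_out = 100, 50

# What I used before: uniform over [0, 1), so every weight is positive with mean 0.5.
W_old = rng.uniform(0, 1, size=(fan_in, fan_out))

# A zero-centered normal already behaves much better.
W_normal = rng.normal(0.0, 1.0, size=(fan_in, fan_out))

# Scaling the standard deviation by the fan-in keeps the variance of the
# activations roughly constant from layer to layer.
W_scaled = rng.normal(0.0, np.sqrt(1.0 / fan_in), size=(fan_in, fan_out))

for name, W in [("uniform(0,1)", W_old), ("normal(0,1)", W_normal), ("scaled normal", W_scaled)]:
    print(f"{name:>14}: mean={W.mean():+.3f}, std={W.std():.3f}")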