Escherboard is our neural network editor, which helps you design neural network architectures so you can iterate faster on different networks. Currently we support any directed acyclic graph built from the provided layers with a valid configuration.
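To make the "any directed acyclic graph of layers" idea concrete, here is a minimal sketch of how such a graph can be represented and executed. The node names, the dictionary format, and the merge-by-summation rule are all illustrative assumptions, not Escherboard's internal representation:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

# Hypothetical DAG format: node -> (list of parent nodes, function applied
# to the sum of the parents' outputs). Two branches merge back together,
# which is exactly what a plain layer chain cannot express.
dag = {
    "input":  ([], lambda x: x),
    "dense1": (["input"], relu),
    "dense2": (["input"], relu),
    "merge":  (["dense1", "dense2"], lambda x: x),
}

def forward(dag, order, x):
    """Run the nodes in topological order and return the last node's output."""
    outputs = {}
    for name in order:
        parents, fn = dag[name]
        value = x if not parents else sum(outputs[p] for p in parents)
        outputs[name] = fn(value)
    return outputs[order[-1]]

result = forward(dag, ["input", "dense1", "dense2", "merge"], np.array([-1.0, 2.0]))
```

Because the graph is acyclic, a single topological pass is enough to evaluate it, no matter how the branches split and merge.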
You simply give us the data location; we upload the data, preprocess it according to the transforms you apply, and save it. When you start a new experiment training on that dataset, we download the preprocessed data to the machine running the training.
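The transform step amounts to composing a list of functions over the raw data before it is saved. The pipeline below is a generic sketch of that pattern, not Eschernode's actual transform API:

```python
import numpy as np

def preprocess(data, transforms):
    """Apply each transform in order, as a dataset-upload step might."""
    for t in transforms:
        data = t(data)
    return data

raw = np.array([1.0, 2.0, 3.0, 4.0])
pipeline = [
    lambda x: x - x.mean(),          # center the data
    lambda x: x / np.abs(x).max(),   # scale into [-1, 1]
]
clean = preprocess(raw, pipeline)
```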
When starting an experiment, you can give us comma-separated values for a hyperparameter, and we train all the resulting variants of the experiment simultaneously and present you the training results.
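Expanding comma-separated values into experiment variants is a Cartesian product over the per-hyperparameter value lists. A small sketch of that expansion (the hyperparameter names here are just examples):

```python
from itertools import product

# Comma-separated values per hyperparameter, as they might be typed in.
grid = {
    "learning_rate": "0.1,0.01,0.001",
    "batch_size": "32,64",
}

keys = list(grid)
values = [v.split(",") for v in grid.values()]
# One dict per variant: every combination of one value per hyperparameter.
variants = [dict(zip(keys, combo)) for combo in product(*values)]
```

Three learning rates times two batch sizes yields six variants to launch in parallel.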
Once training has started, you can enable notifications based on an error-metric threshold. You can also get notifications for events like vanishing gradients or underutilized compute resources in a given variant.
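The notification logic reduces to checking a few conditions against the latest metrics. The function below is a sketch of that check; the metric names and default thresholds are assumptions for illustration:

```python
def check_events(metrics, loss_threshold=0.05, grad_floor=1e-6, util_floor=0.3):
    """Return the list of notification-worthy events for one variant."""
    events = []
    if metrics["loss"] <= loss_threshold:
        events.append("loss threshold reached")
    if metrics["min_grad_norm"] < grad_floor:
        events.append("vanishing gradients")
    if metrics["gpu_utilization"] < util_floor:
        events.append("compute underutilized")
    return events

events = check_events(
    {"loss": 0.04, "min_grad_norm": 1e-8, "gpu_utilization": 0.9}
)
```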
You can load existing models onto the Escherboard along with their trained weights, and build on those networks to come up with new features and models.
Depending on the type of problem and the size of the dataset, you can decide what type of infrastructure a given experiment should run on.
During training, you can see visually how the gradients propagate through the network, which helps you debug issues like vanishing gradients. You are also presented with data showing how efficiently compute resources are being used.
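Per-layer gradient norms are the usual raw material for this kind of visualization: a layer whose gradient norm collapses toward zero is a vanishing-gradient suspect. A sketch with illustrative gradient values (in practice these would come from the training backend):

```python
import numpy as np

# Example per-layer gradients; the tiny values in layer1 mimic vanishing.
grads = {
    "layer1": np.array([1e-9, -2e-9]),
    "layer2": np.array([0.3, -0.1]),
    "layer3": np.array([0.02, 0.05]),
}

# One norm per layer: this is what a gradient-flow plot would chart.
norms = {name: float(np.linalg.norm(g)) for name, g in grads.items()}
vanishing = [name for name, n in norms.items() if n < 1e-6]
```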
You can export a model, with or without weights, at every epoch of training.
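A per-epoch export boils down to writing the architecture every time and including the weights only when asked. The file naming and JSON layout below are hypothetical, purely to show the with/without-weights distinction:

```python
import json
import os
import tempfile

def export_model(arch, weights, epoch, out_dir, with_weights=True):
    """Write one export file per epoch; weights are optional."""
    path = os.path.join(out_dir, f"model_epoch_{epoch}.json")
    payload = {"architecture": arch}
    if with_weights:
        payload["weights"] = weights
    with open(path, "w") as f:
        json.dump(payload, f)
    return path

out_dir = tempfile.mkdtemp()
path = export_model({"layers": ["dense", "relu"]}, [[0.1, 0.2]], epoch=3, out_dir=out_dir)
```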
Our reinforcement learning solution is based on OpenAI's rllab. You can train various RL algorithms on most of the RL environments from OpenAI Gym. You can do hyperparameter search, launch on appropriate machines, and get notified about the thresholds you are interested in.
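Every such training run follows the same episode-loop shape. The sketch below uses a trivial stub in place of a real Gym environment (such as CartPole-v1) so that only the loop structure is shown; the stub's fixed reward and horizon are assumptions, not anything from Gym or rllab:

```python
class StubEnv:
    """Tiny stand-in for a Gym-style environment: Gym's env.reset() /
    env.step(action) interface, with a fixed reward and a short horizon."""

    def __init__(self, horizon=5):
        self.horizon = horizon

    def reset(self):
        self.t = 0
        return 0.0

    def step(self, action):
        self.t += 1
        observation, reward = float(self.t), 1.0
        done = self.t >= self.horizon
        return observation, reward, done, {}

def run_episode(env, policy):
    """Roll out one episode and return the total reward."""
    obs = env.reset()
    total, done = 0.0, False
    while not done:
        obs, reward, done, _ = env.step(policy(obs))
        total += reward
    return total

# A do-nothing policy is enough to exercise the loop.
total_reward = run_episode(StubEnv(), lambda obs: 0)
```

Swapping `StubEnv()` for a real Gym environment leaves `run_episode` unchanged, which is what makes the same harness reusable across environments.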
These are the typical steps involved in making deep learning a breeze using Eschernode.
Create your neural network
Add the layers provided onto the Escherboard and build your neural network. Any directed acyclic graph formed from layers with a proper configuration is supported right now, so complex architectures like RESNET can be built without writing a single line of code. You can save this network in your model zoo.
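What makes RESNET-style architectures a DAG rather than a plain chain is the residual (skip) connection. A minimal numpy sketch of one residual block, with illustrative identity weights standing in for trained parameters:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, w1, w2):
    """One RESNET-style block: transform, then add the input back in
    (the skip connection) before the final activation."""
    h = relu(w1 @ x)
    return relu(w2 @ h + x)

# Identity weights keep the arithmetic easy to follow.
x = np.array([1.0, -1.0])
w1 = np.eye(2)
w2 = np.eye(2)
out = residual_block(x, w1, w2)
```

On the canvas, the skip connection is just one more edge from the block's input to its output node.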
Create a data pipeline
Upload a dataset, or use one of the existing datasets, and create a pipeline to do feature engineering and clean the data for training.
Hyperparameter Sweep
Start the training process using a model from the model zoo and a dataset from your datasets. For a single experiment we support at most 9 variants running at a time. You just specify the different values a given hyperparameter takes, and we start all the variants.
Notifications and Results
You can specify thresholds on loss or rewards and be notified when those thresholds are reached. You can track how the different variants are doing on the experiment page. You will even be notified when we encounter events like gradients vanishing at a given layer, which can be visualized in the GUI.
Model Export and API
You can export the trained weights and use them yourself, or just enable a REST API on the dashboard that handles the predictions.
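The REST option comes down to a small JSON contract: features in, prediction out. The handler below sketches that contract with a trivial stand-in model; the payload shape and field names are hypothetical, not Eschernode's actual API:

```python
import json

def predict(features):
    """Stand-in for an exported model; a real deployment would load
    the trained weights instead."""
    return sum(features)

def handle_request(body):
    """Turn a JSON request body into a JSON prediction response."""
    payload = json.loads(body)
    return json.dumps({"prediction": predict(payload["features"])})

response = handle_request('{"features": [1.0, 2.0, 3.0]}')
```

Wiring this handler behind any HTTP framework gives clients predictions without ever touching the exported weights directly.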